Event 1: my notes.

The Scripting Games are a chance to learn. You can learn from good examples or from mistakes, and often one person's mistake is another's perfect solution. In PowerShell there are usually many ways to skin the cat; some of those I like, some not so much. So what did I like and dislike so far? Here are a few notes I made.

Beginners – likes

Let's start on a positive note. What did I like? Well, it's not hard to guess that I liked it when people used techniques similar to the ones I would use myself. 😉 I must say I'm impressed with some approaches. I especially liked the idea of creating a folder in a script block passed to the –Destination parameter, which I saw in one of the entries. As you probably know, New-Item (or any other command you might want to use instead, like the mkdir function) returns an object. And that object fits perfectly as a value for –Destination:

Get-ChildItem *.log | Move-Item -Destination { mkdir Test }

The original obviously used the value of the input file to decide what the destination folder's name should be. Also, the use of –Force made it all possible.
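A minimal sketch of how that could look; the `Arch\<parent>` destination naming here is my own invention, not the original entry's:

```powershell
# -Destination's script block runs once per input file; mkdir -Force
# returns the folder object even when the folder already exists.
Get-ChildItem Log\*\*.log |
    Move-Item -Destination { mkdir "Arch\$($_.Directory.Name)" -Force } -WhatIf
```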

Another smart idea I've seen was using Select-Object to pre-calculate values for parameters:

Get-ChildItem Log\*\*.log | select @{
    Name = 'Destination'
    Expression = {
        "\\NASServer\Archive\{0}" -f $_.Directory.Name
    }
}, @{
    Name = 'Path'
    Expression = { $_.FullName }
} | Move-Item -WhatIf

I've also seen many people using Join-Path and Split-Path – way better in my book than building paths manually. This holds for almost any provider, including the ActiveDirectory provider:

Set-Location AD:\
Join-Path -ChildPath OU=Test -Path 'DC=monad,DC=ps1'

No need for concatenating strings here. Finally, I liked it when people used well-known tools. I truly believe that the "Shell" in the name of my beloved product is there for a reason. On the other hand: if you decide to use built-in tools (like robocopy), don't hesitate and don't stop halfway. Why not:

Robocopy.exe .\Log .\Arch *.log /MIR /MOV /MINAGE:90

Finally: the format operator. I really like this technique because it separates form from data. If you ever decide to change the data, you don't have to mess with the form. It also makes it easier to use properties – no need for sub-expressions, you just "inject" the value into the string:

$Name = 'PowerSheller'
'Welcome, {0}' -f $Name
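The same idea scales to properties of pipeline objects; the files and format string below are chosen purely for illustration:

```powershell
# One format string for every object; only the injected data changes.
Get-ChildItem $PSHOME\*.dll |
    ForEach-Object { '{0,-35} {1,10:N0} bytes' -f $_.Name, $_.Length }
```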

Beginners – dislikes

The worst "sin" I've seen was that some attendees made things worse with their scripts. The scenario tells us about a lack of space, yet they copy files rather than move the old logs. In this world, "works" is the most important thing. If you only copied files (without at least removing them afterwards), then in my opinion nothing will make your script look good.

Code-wise, the worst thing I've seen was wannabe one-liners. From a code perspective they were not even close to one-liners. Heck, even a one-liner, when written for others to read, can be broken into several lines! But if you have several statements separated by semicolons, all in one loooong line, you are doing it wrong. Seriously.

Another issue was repeated code. I'm not a fan of any code that is (almost) the same thing repeated three times. PowerShell supports looping constructs very well; use them to your advantage rather than copy & paste in your editor of choice.
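A sketch of what I mean; the folder names here are hypothetical:

```powershell
# One loop instead of three copy-pasted Move-Item lines:
foreach ($folder in 'App', 'Web', 'Db') {
    Move-Item -Path "Log\$folder\*.log" -Destination "Arch\$folder" -WhatIf
}
```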

Finally: .NET. Especially the frequent use of .Replace, .Split, and .Substring. Really, there are better ways of working with paths than $Path.Split('\')[-1]. If there is a cmdlet, try to use it. Use .NET when it's quicker and obvious, like (Get-Date).AddDays(), or when it's necessary.
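Comparing the two approaches on a sample path:

```powershell
$Path = 'C:\Logs\App\today.log'
$Path.Split('\')[-1]             # 'today.log' – works, but knows nothing about paths
Split-Path -Path $Path -Leaf     # 'today.log' – same result, obvious intent
```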

Advanced – likes

I must admit, I'm very happy with most of the scripts I've seen so far. Comment-based help is almost always there. Parameters are very often validated (though sometimes I've seen validations that are not needed or won't work, like [ValidateNotNullOrEmpty()] on an integer). Most scripts use [Parameter()] to make parameters mandatory, or to define parameters that work with the pipeline. Error handling was also very good most of the time – I especially liked the use of the –ErrorVariable parameter to create your own error variable rather than relying on the built-in $Error. Use of the pipeline is also usually very good.
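A minimal sketch of such a parameter block; the function and parameter names are mine, for illustration:

```powershell
function Move-OldLog {
    [CmdletBinding()]
    param(
        # Mandatory, and bindable straight from the pipeline:
        [Parameter(Mandatory, ValueFromPipeline)]
        [System.IO.FileInfo]$File,

        # Validation that actually makes sense for an integer:
        [ValidateRange(1, 3650)]
        [int]$Days = 90
    )
    process {
        if ($File.LastWriteTime -lt (Get-Date).AddDays(-$Days)) { $File }
    }
}
```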

People don't forget about reusability and try to build functions; however, this task was relatively "closed" and a single function would probably suffice.

Another good thing is that most of the scripts favor Write-Verbose and Write-Debug over Write-Host. This is very important, especially in PowerShell v3, where the former can be redirected to standard output and the latter can't.

And finally, something I wrote a separate blog post about: the usage of WhatIf and Confirm. I really feel it has to be implemented rather than just enabled; you really shouldn't leave it to the cmdlets to pick up. Some people did it, and they got on my favorites list (unless they did something else that made me change my mind 😉).
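A minimal sketch of what "implemented" means here; the function name is hypothetical:

```powershell
function Remove-OldLog {
    [CmdletBinding(SupportsShouldProcess)]
    param([Parameter(Mandatory)][string]$Path)
    # The check lives in your code, not left for an inner cmdlet to pick up:
    if ($PSCmdlet.ShouldProcess($Path, 'Remove old log')) {
        Remove-Item -Path $Path
    }
}
```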

Advanced – dislikes

Most of the issues I mentioned for the beginner category also apply to some scripts written in the advanced category. What else? The list goes on, so fasten your seat belts…

I'm the naming police (at heart; I haven't got any ID or anything else to prove it). So if you violate the Three Rules of Command Naming, you are in trouble. First of all, the name has three parts: verb, hyphen, noun. There are some exceptions to this, but I hardly believe your function is exceptional enough to join that group. 😉 Second: the verb has to be approved. Backup is not an approved verb. Neither is Archive. On top of approved verbs there are also other rules involved, but I won't go that far; if your verb is approved, it's fine. If you want tips on which verb to choose, take a look at the MSDN docs. Finally: the noun. What command do you use to get all processes on your box? Get-Process. And services? Get-Service. So how would you name a function that moves some files? If your answer is "Move-SomeFiles", start over from "Finally: the noun". Repeat. 😉
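You can check a verb against the approved list without leaving the console:

```powershell
Get-Verb Move                                      # Move is on the approved list
Get-Verb | Where-Object { $_.Verb -eq 'Backup' }   # empty: Backup is not approved
```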

The next thing is the oh-so-famous "if" and comparing a boolean to a boolean. Why do I think it's bad? Because $true –eq 'False' evaluates to $true. And because "if" works on booleans. You already have one, so why would you compare it to anything? if (Test-Path) {} will just work. No need to compare the result of this command to $true or $false. If you want to reverse it, just use –not and you are good to go. And if you start comparing booleans to strings, you are asking for trouble.
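To see it in action:

```powershell
$true -eq 'False'    # True! the non-empty string 'False' converts to $true
if (Test-Path $env:TEMP) { 'exists' }                  # the boolean is already there
if (-not (Test-Path 'Q:\NoSuchFolder')) { 'missing' }  # reversed with -not
```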

Another issue I've seen is scripts that validate parameters in the script body. Why would you write your own validation logic when everything you need is probably already there? Add [Validate*()] to your parameter definition and you are done with it. You won't have to work hard on friendly error messages; they are built in and consistent. Use them.
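A sketch, with names invented for illustration:

```powershell
function Get-OldLog {
    param(
        # Rejects 0, negative values, and anything over ten years:
        [ValidateRange(1, 3650)]
        [int]$Days,

        # Rejects paths that don't exist or aren't folders:
        [ValidateScript({ Test-Path $_ -PathType Container })]
        [string]$Path
    )
}
# Get-OldLog -Days 0 fails immediately, with a consistent, built-in error message.
```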

On that note: when you use them, be aware that you have no access to the other parameters being validated. If you check the value of another parameter, you are looking either at a non-existent variable or at a variable defined in the parent scope. If the user passes a different value, you will get false negatives or false positives. Either is bad if your code relies on the condition used.

I also saw a script where the author implemented the #requires –version 3.0 check on his own. While this is not crucial and a hand-made solution will work, I really feel that using the built-in feature is not only easier and less work, but also consistent: the error message makes it clear why a given script won't run on the current box.
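The built-in version is a single line at the top of the script:

```powershell
#requires -Version 3.0
# On an older engine the script refuses to run, with a clear, consistent error,
# before a single line of your code executes.
```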

The next topic can be explained in a single word: aliases. If you don't know why they're bad, try reading some really cryptic examples written by others. Then imagine you have to modify a script you wrote at 3 AM, in the middle of a fire.

Finally, something I've seen only once, but I must mention it. Being a CLI person, I really can't see any value in a script that forces me to pass parameters using a GUI. If you want a GUI – no problem, design one. But if you want to make a reusable tool, make sure people who don't love the mouse so much can simply call your script with inline parameters. Unless, obviously, the script is designed with a GUI in mind (like a picture browser). And that would be it. A pretty long list, but don't get me wrong: I still think the scripts are pretty good overall. Just not flawless.

9 thoughts on “Event 1: my notes.”

  1. Pingback: Few notes written after event 1. | PowerShell.org

  2. I was surprised by the “Move-Item -Destination { mkdir Test }” approach, but I don’t like it. It does allow you to “one-line” it, but it also tries to create the folder for each file. Not a big deal when there are 10 files, but when there are 10,000 that’s a lot of extra overhead.

    A lot of judging is opinion, though, so it’s good to see what people like and don’t like.

    • What I meant is that it's smart because mkdir outputs an object that can be used straight away for the -Destination parameter; no other action is required. Obviously, if performance were the main factor, this would be a bad idea.
      But if performance were the factor, you could always use direct .NET calls… 😉

      measure-command {
      [System.IO.Directory]::CreateDirectory("$pwd\test")
      }

      measure-command {
      mkdir $PWD\Test -Force
      }

  3. Great notes! And thanks for getting them out before Event 2 starts.

    I would like to pose a question for the naming police though: Should we value tradition over usage?

    Here's what I mean:
    Get-Process, Get-Service, Move-Item – all of these set the precedent that should be standard practice.
    And if we look at what they do, the singular form is truly appropriate. They work with items individually, but can be fed groups of items.

    But not all cmdlets are capable of working with individual items. Here are 3 examples:
    Get-LogProperties – gets ALL properties of a log. It has no capability for addressing a single property.
    Close-AllOpenedFiles – closes ALL files opened by the ISE.
    Get-GPPermissions – gets ALL permissions for an individual security principal.

    The reason these three are exceptions to the naming rules is that the plural immediately conveys the scope they are capable of working with.

    So for our function in event one, we are expressly given the scope of working with ALL files of an individual path. Any successful entry for this event should not be intended to take a file name as a parameter.

    So… should our chosen name for the function be in plural form to express the limitation "this IS NOT intended to work with individual files"?

    Or should we follow a traditional naming structure at the risk of poorly expressing the purpose of the tool we’re creating?

    • Funny that you mention one of the commands I used as an example in the Polish version of my notes… But I mentioned it as an example of the opposite: even Microsoft makes naming mistakes at times and has to fix them in the next release. You mentioned Get-GPPermissions? Do you know that it's no longer plural in W2K12? They had to define an alias with the wrong name that points to the correctly named cmdlet: Get-GPPermission.
      So, no: I truly believe there are no exceptions.
      BTW: the task was to pick up only a subset of files (older than X days), so it could be 100, 1, or none… 🙂

      • Haha, I wasn’t aware of that.
        Thanks for sharing your thoughts. I still feel like it leaves us with an unresolved paradox.
        Since I don't want to make the mistake of picking a fight with you 😉 I'm going to solicit some suggestions on the powershell.org site to see if I can find a way of resolving this paradox.

        ( And yeah, I understand the subset nature of the task, my point is that it examines ALL files in the path for subset selection.
        In fact that’s my exact point. You can’t have a subset of a singular {a file}, you can only have a subset of a plural {a folder’s contents}.)

  4. I screwed up and used Copy-Item because I forgot to change it from my testing, but that was a fair critique. The one I disagreed with in my critiques is whoever assumed version 3. I don't even use the version 3 Where-Object syntax when I am on a 2012 server, or ever. I prefer the long syntax that will work universally.

  5. Pingback: The 2013 Scripting Games, Beginner Event #2 | Mello's IT Musings

Leave a comment