Chocolatey Now has Package Moderation

Well, just over three years after launching https://chocolatey.org, we’ve finally implemented package moderation, and it’s a huge step forward. It means that when packages are submitted, they will be reviewed and signed off by a moderator before they are allowed to show up and be used by the general public.

What This Means for You, Package Consumers

  • Higher quality packages - we are working to ensure that by the time a package goes live, moderators have given feedback to maintainers and fixes have been applied.
  • More appropriate packages - packages that are not really relevant to Chocolatey's community feed will not be approved.
  • More trust - packages are now reviewed for safety and completeness by a small set of trusted moderators before they are live.
  • Reviewing existing packages - All pre-existing packages will be reviewed and duplicates will be phased out.
  • Not Reviewed Warning - Pre-existing packages that have not yet been reviewed will carry a warning on chocolatey.org. Since this is considered temporary while we work through moderation of the older packages, we didn't see a need to add a switch to the existing choco client.

Existing packages that have not been moderated yet will have a warning posted on the package page that looks like

This package was submitted prior to moderation and has not been approved. While it is likely safe for you, there is more risk involved.

Packages that have been moderated will have a nice message on the package page that looks like

This package was approved by moderator mwrock on 10/26/2014.

If the package is rejected, the maintainer will see a message, but no one else will see or be able to install the package.

You should also keep the following in mind:

  • We are not going to moderate prerelease versions of a package as they are not on the stable feed.
  • We are likely only moderating the current version of a package. If you feel older versions should be reviewed, please let us know through the contact site admins link on the package page.
  • The choco client is not going to give you any indication of approval status. We expect the unreviewed state to be temporary while we review all existing packages, so we didn’t see much benefit for the amount of work involved to bring it to the choco client in its current implementation.

What This Means for Package Maintainers

  • Guidelines - Please make sure you are following the package guidelines outlined at https://github.com/chocolatey/chocolatey/wiki/createpackages - this is how moderators will evaluate packages.
  • Re-push same version - While a package is under review, you can continue to push up that same version with fixes.
  • Email - Expect email communication for moderation. If your email is out of date or you never receive email from chocolatey, ensure it is not going to the spam folder. We will give non-responsive maintainers up to two weeks before we reject a package. It's likely we will then review every version of that package as well.
  • Learning about new features - During moderation you may learn about things you didn't know before.
  • Pre-existing - We are going to be very generous with pre-existing packages. The first time we accept a package we will communicate what needs to be corrected; the second update will need to have those items corrected.
  • Push gives no indication of moderation - Choco vCurrent gives no indication that a package went into review. We are going to put out a point release with that message and a couple of small fixes.

Moderation Means a Long Term Future

We are making investments in the long-term viability of Chocolatey. The improvements we are making show that your support of the Chocolatey Kickstarter, and the future of Chocolatey, is a real thing. If you haven’t heard about the Kickstarter yet, take a look at https://www.kickstarter.com/projects/ferventcoder/chocolatey-the-alternative-windows-store-like-yum.

Chocolatey Kickstarter–Help Me Take Chocolatey to the Next Level

I’m really excited to tell you about The Chocolatey Experience! We are taking Chocolatey to the next level and ensuring the longevity of the platform. But we can’t get there without your help! Please help me support Chocolatey and all of the improvements we need to make!

 

https://www.kickstarter.com/projects/ferventcoder/chocolatey-the-alternative-windows-store-like-yum

Chocolatey Newsletter

Chocolatey has some big changes coming in the next few months, so we’ve started a newsletter to keep everyone informed of what’s coming. The folks who are signed up for the newsletter will hear about the latest and greatest changes coming for Chocolatey first, plus they will know when the Kickstarter (Yes! Big changes are coming!) kicks off before anyone else. Sign up for the newsletter now to learn about all the exciting things coming down the pipe for Chocolatey!

Puppet: Getting Started On Windows

Now that we’ve talked a little about Puppet, let’s see how easy it is to get started.

Install Puppet

Let’s get Puppet installed. There are two ways to do that:

  1. With Chocolatey: Open an administrative/elevated command shell and type:
    choco install puppet
  2. Download and install Puppet manually - http://puppetlabs.com/misc/download-options

Run Puppet

  • Let’s make pasting into a console window work with Control + V (like it should):
    choco install wincommandpaste
  • If you have a cmd.exe command shell open (and chocolatey installed), type:
    RefreshEnv
  • The previous command will refresh your environment variables (a feature of Chocolatey v0.9.8.24+). If you are running PowerShell, there isn’t yet a refreshenv for you (one is coming though!).
  • If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type:
    puppet resource user
  • Output should look similar to a few of these:
    user { 'Administrator':
      ensure  => 'present',
      comment => 'Built-in account for administering the computer/domain',
      groups  => ['Administrators'],
      uid     => 'S-1-5-21-some-numbers-yo-500',
    }
  • Let's create a user:
    puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created
  • Run the 'puppet resource user' command again. Note the user we created is there!
  • Let’s clean up after ourselves and remove that user we just created:
    puppet apply -e "user {'bobbytables_123': ensure => absent, }"
  • Relevant output should look like:
    Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed
  • Run the 'puppet resource user' command one last time. Note we just removed a user!

Conclusion

You just did some configuration management /system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven’t talked about resources, manifests (scripts), best practices and all of that yet.

Next time we’ll start to get into more extensive things with Puppet by walking through getting a Vagrant environment up and running. That way we can do some crazier stuff, and when we are done, we can just clean it up quickly.

Puppet: Making Windows Awesome Since 2011

Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure, with a third of the platform client development staff being Windows folks. It appears that Microsoft believed an end-state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC.

Puppet Labs is pushing the envelope on Windows. Here are several things to note:

It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources!

Puppet does take some time to learn, but as with anything you need to learn, you need to weigh the benefits against the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then, but it was the only game in town. Puppet’s ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs.

As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of in an imperative sense.

You may also find that right now Puppet doesn’t run manifests (scripts) in the order that resources are specified. This is the number one learning point for most folks. Manifest ordering has long been a point of consternation for some folks, because it was not possible in the past. In fact, it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources

You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose? Other CMs are integrating with DSC, so will Puppet follow suit and integrate with DSC? The biggest concern I have with DSC is its lack of visibility in fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping there are newer releases that actually work; it does have some promising capabilities, it just doesn’t quite come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around a while and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows.

Yes, Puppet on Windows is here to stay, and it’s continually getting better, folks.

Puppet ACLs–Mask Specific

Access Control Lists and permissions can get inherently complex. We didn’t want to prevent a sufficiently advanced administrator/developer/etc. from getting to advanced scenarios with ACLs using Puppet’s ACL module. With the ACL module (soon to be) out in the wild, it may be helpful to explain one of its significantly advanced features: mask-specific rights. For the rest of this post I am going to use the term “acl” to mean the module (and not an access control list or discretionary access control list).

Say you need very granular rights, not just RX (read, execute), but also the ability to read and write attributes. You already get read attributes (FILE_READ_ATTRIBUTES) with read (FILE_GENERIC_READ); see http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx. The acl module provides you with the ability to specify 'full', 'modify', 'write', 'read', 'execute' or 'mask_specific'. Mask specific is for when you can’t get the specific rights you need for an identity (trustee, group, etc.) from the named rights and need to get more specific.

Let’s take a look at what mask specific looks like:

acl { 'c:/tempperms':
  permissions => [
   { identity => 'Administrators', rights => ['full'] }, #full is same as - 2032127 aka 0x1f01ff but you should use 'full'
   { identity => 'SYSTEM', rights => ['modify'] }, #modify is same as 1245631 aka 0x1301bf but you should use 'modify'
   { identity => 'Users', rights => ['mask_specific'], mask => '1180073' }, #RX WA #0x1201a9
   { identity => 'Administrator', rights => ['mask_specific'], mask => '1180032' }  #RA,S,WA,Rc #1180032  #0x120180
  ],
  inherit_parent_permissions => 'false',
}

Note specifically that “rights=>[‘mask_specific’]” also comes with a mask integer specified as a string e.g. “mask => ‘1180032’”. Now where did that number come from? In this specific case you see it is RA,S,WA,Rc (Read Attributes, Synchronize, Write Attributes, Read Control). Let’s take a look at http://msdn.microsoft.com/en-us/library/aa394063(v=vs.85).aspx to see the Access Mask values (integer and hex).

SYNCHRONIZE
1048576 (0x100000)

If we look here, 1048576 is the one we want. Let’s whip out our calculators. You knew that math in high school and college was going to be put to good use, right? Okay, calculators out, let’s add those numbers up.

S  = 1048576
Rc =  131072
RA =     128
WA =     256
-------------
     1180032

That’s the same as the number we have above, so we are good. You know how to make mask_specific happen with the acl module should you ever need to. 

Understanding Advanced Permissions

Oh, wait. I should explain a slightly more advanced scenario: RX plus WA, like we started to talk about above. How do you get to that number, and where is FILE_GENERIC_READ? Back at http://msdn.microsoft.com/en-us/library/windows/desktop/aa364399(v=vs.85).aspx, we can see that it includes FILE_READ_ATTRIBUTES, FILE_READ_DATA, FILE_READ_EA, STANDARD_RIGHTS_READ, and SYNCHRONIZE. FILE_GENERIC_EXECUTE contains FILE_EXECUTE, FILE_READ_ATTRIBUTES, STANDARD_RIGHTS_EXECUTE, and SYNCHRONIZE. Notice the overlap there? Each one of those flags only gets added ONCE. This is important. If you are following along and looking, you have noticed STANDARD_RIGHTS_READ and STANDARD_RIGHTS_EXECUTE are not listed on the page with the rights. Where did those two come from? Take a look at http://msdn.microsoft.com/en-us/library/windows/desktop/aa374892(v=vs.85).aspx down in the C++ section. Do you notice anything? Wait, what?

STANDARD_RIGHTS_READ, STANDARD_RIGHTS_EXECUTE, and STANDARD_RIGHTS_WRITE are all synonyms for READ_CONTROL. What? Why not just call it read control? I don’t know, I’m not the guy that wrote the Access Masks. Anyway, now we know what we have so let’s get our calculators ready again.

RA    =     128
RD    =       1
REa   =       8
StdRd =  131072
S     = 1048576
FE    =      32
REa   =       8
StdEx =  131072
S     = 1048576

Let’s remove the duplicates (and the tricky READ_CONTROL duplicate).

RA  =     128
RD  =       1
REa =       8
Rc  =  131072
S   = 1048576
FE  =      32
-------------
      1179817

That doesn’t quite work out to the ‘1180073’ we were expecting. Did we forget something? Yes, we got so wrapped up in getting RX sorted out that we forgot about WA, which adds another 256 to the number.

RA  =     128
RD  =       1
REA =       8
Rc  =  131072
S   = 1048576
FE  =      32
WA  =     256

-------------
      1180073
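
If you would rather let code do the bit math than a calculator, here is a small C# sketch that builds both masks from the underlying Win32 access rights. The constant values come straight from the Windows access mask documentation; the class and variable names are just for illustration. OR-ing the flags (instead of adding them) is also what makes the overlap between FILE_GENERIC_READ and FILE_GENERIC_EXECUTE a non-issue, since shared bits are only counted once.

using System;

class MaskSpecific
{
    // Win32 access right values (from the MSDN access mask documentation).
    const uint FILE_READ_DATA        = 0x000001; // RD
    const uint FILE_READ_EA          = 0x000008; // REa
    const uint FILE_EXECUTE          = 0x000020; // FE
    const uint FILE_READ_ATTRIBUTES  = 0x000080; // RA
    const uint FILE_WRITE_ATTRIBUTES = 0x000100; // WA
    const uint READ_CONTROL          = 0x020000; // Rc (aka STANDARD_RIGHTS_READ/EXECUTE/WRITE)
    const uint SYNCHRONIZE           = 0x100000; // S

    static void Main()
    {
        // 'Users' above: RX plus write attributes.
        uint usersMask = FILE_READ_DATA | FILE_READ_EA | FILE_EXECUTE | FILE_READ_ATTRIBUTES
                       | FILE_WRITE_ATTRIBUTES | READ_CONTROL | SYNCHRONIZE;

        // 'Administrator' above: RA, S, WA, Rc.
        uint adminMask = FILE_READ_ATTRIBUTES | FILE_WRITE_ATTRIBUTES | READ_CONTROL | SYNCHRONIZE;

        Console.WriteLine("{0} (0x{0:X})", usersMask); // 1180073 (0x1201A9)
        Console.WriteLine("{0} (0x{0:X})", adminMask); // 1180032 (0x120180)
    }
}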

 

Parting Thoughts

While the ACL module has a simple interface, you can definitely see that it packs some power. Having this kind of power is really helpful when you need to get fine-grained with your permissions.

Hacking on Puppet in Windows: Running Puppet Commands Against the Source Code

This is not something one would normally do, but this is here for future reference for me.

First of all, ensure the puppet, facter and hiera source code repositories are all checked out from git and share the same top-level directory.

Then take the environment.bat file that ships with the puppet installer (in the bin directory), copy it somewhere that is on your PATH, and edit the first two lines so that PL_BASEDIR points to the top-level directory for all of those previous items.

SET PL_BASEDIR=C:\code\puppetlabs
REM Avoid the nasty \..\ littering the paths.
::COMMENT THIS LINE SET PL_BASEDIR=%PL_BASEDIR:\bin\..=%

Then copy the puppet.bat file over to the same directory as your modified environment.bat file and you are money.

Don’t have those files? No problem, I’ve created a Gist for that.

Puppet for the Win[dows]!

PuppetConf 2013

I recently attended PuppetConf 2013 (the 3rd annual event) and all I can say coming away from it is wow. It was an amazing event with quite a few great speakers and sessions. There were over 100 speakers and more than 1200 attendees, and the live streaming of many sessions and keynotes drew a huge audience (I don’t remember the number off the top of my head). With seven tracks going at a time, not including demos or hands-on labs, it was quite an event.

Disclaimer: I work for Puppet Labs but my opinions are my own.

The venue was awesome (San Francisco at the Fairmont Hotel) and I wish I’d had a little more time outside of the conference to go exploring. Being there as an attendee, speaker, employee, and volunteer, I saw all sides of the conference. Everything was well prepared and I saw no hiccups from any side. Walking around at some of the events I could hear a buzz in the air about Windows, and I happened to overhear a few folks mention the word chocolatey, which was definitely cool considering the majority of folks at PuppetConf are mainly Linux with some mixing of environments. I’m hoping to see that start to tip next year.

There were 4 talks on Windows and I was able to make it to almost all of them (5 talks if you consider my hands-on lab a talk). Only two of those were given by Puppet Labs folks, so it was nice to see some talks considering there were none last year (I need to verify this).

My Hands On Lab – Getting Chocolatey (Windows Package Provider) with Puppet

Link: http://puppetconf2013b.sched.org/event/ddd309df1b03712cf1ba39224ad5e852#.Uht-a2RgbVM

The hands on lab did not go so well. Apologies to the attendees of the lab, but there was an issue with the virtual machine that I had provided. It was corrupted somewhere between copying it from my box to all of the USB sticks that we gave to lab attendees. Since it was only a 40 minute lab, we had to switch to a quick demo.

I did promise those folks that I would get them a functional hands on lab and here it is: https://github.com/chocolatey/puppet-chocolatey-handsonlab (You can take advantage of it as well for free!).

My Talk – Puppet On Windows: Now You’re Getting Chocolatey!

Link: http://puppetconf2013b.sched.org/event/ecfda2ef5c398eca29b00ce756cd405d#.Uht_7GRgbVM

My talk went very smoothly. It was almost night and day, having given a failing lab a little over an hour prior to a talk that had quite a bit of energy in the room. I enjoyed the feedback coming from the audience and the session went (I felt) very well. Sessions were recorded, so be on the lookout for that to show up soon. Until then you can check out the slides here: http://www.slideshare.net/ferventcoder/puppet-on-windows-now-youre-getting-chocolatey-puppetconf2013 – and if you came to the session, I’d appreciate feedback on how I did and where I can improve. You can do that here: http://speakerrate.com/talks/25271-puppet-on-windows-now-you-re-getting-chocolatey

Career-Defining Moments

Fear holds us back from many things. A little fear is healthy, but don’t let it overwhelm you into missing opportunities.

In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it?

I think in many cases it comes down to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don’t know. Some hold back because they are fearful of learning new things. Some hold back simply because taking on a new challenge means they have to give something else up. The phrase sometimes used is “It’s the devil you know versus the one you don’t.” That fear sometimes causes us to miss great opportunities.

In many people’s case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here out of a fear of failing. We’ve all heard the phrase “What would you do if you knew you couldn’t fail?”, which is intended to get people to think about the opportunities they might create. A better framing I heard recently on the Ruby Rogues podcast was “What would be worth doing even if you knew you were going to fail?” I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn’t care, it would open you up to the possibility of paying more attention to the journey and not the outcome.

In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as “The Flinch”. I’m learning Ruby and Puppet right now. I have limited experience with both, limited to the degree it scares me some that I don’t know much about either. Okay, it scares me quite a bit!

Some people’s defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my “mecca” is a little different in that regard.

A while back I sat down and defined where I wanted my career to go, and it had more to do with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It’s an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there.

Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these “moments.”  On July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I’m both scared and excited about the opportunity!

Chocolatey official public feed now has 1,000 stable packages


Chocolatey has reached a milestone: 1,000 unique stable packages! When I started chocolatey a little over two years ago, I didn't know there would be such tremendous community uptake. I am blessed that you have found value in chocolatey and have contributed code, packages, bug reports and ideas to make chocolatey better.

To celebrate this we should look at who contributed the package that put us over the top. It was Justin Dearing with SqlKerberosConfigMgr (http://chocolatey.org/packages/SqlKerberosConfigMgr). And I'm giving Justin a $50 gift card for Amazon as a small token of my appreciation. It's not much but we appreciate the contributions! This was unannounced because we want to focus on quality, not quantity.

Now, while this is a significant milestone, we are not very far along in the bigger scheme of offerings for Windows. There is no hurry to get there; we prefer quality packages over quantity of packages. We will eventually grow much bigger, and as we add additional sources, it increases the number of packages we can offer.

Thanks so much to all of you for all of your work, we wouldn't be where we are today without the community!

Chocolatey Automatic Packages

I updated three packages this morning. I didn’t even notice until the tweets came in from @chocolateynuget.

How is this possible? It’s simple. I love automation. I built chocolatey to take advantage of automation. So it would make sense that we could automate checking for package updates and publishing those updated packages. These are known as automatic packages. Automatic packages are what set Chocolatey apart from other package managers, and I daresay they could make chocolatey one of the most up-to-date package managers on Windows.

Automatic Packages You Say?

You’ve followed the instructions for creating a GitHub (or really any source control) repository with your packages. All you need to do now is introduce two new utilities to your personal library: Ketarin and Chocolatey Package Updater (chocopkgup for short).

Ketarin

Ketarin is a small application which automatically updates setup packages. As opposed to other tools, Ketarin is not meant to keep your system up-to-date, but rather maintain a compilation of all important setup packages which can be burned to disc or put on a USB stick.

There are some good articles out there that talk about how to create jobs with Ketarin so I am not going to go into that.

Ketarin does a fantastic job of checking sites for updates and has hooks to run custom commands before and after it has downloaded the latest version of an app/tool.

Chocolatey Package Updater

Chocolatey Package Updater, aka chocopkgup, takes the information Ketarin provides about a tool/app update and translates it into a chocolatey package that it builds and pushes to chocolatey.org. It does this so you don't even have to think about updating a package or keeping it up to date. It just happens. Automatically, in the background, and even faster than you could make it happen. It's almost as if you were the application/tool author.

How To

Prerequisites And Setup:

  1. Optional (strongly recommended) - Ensure you are using a source control repository and file system for keeping packages. A good example is here.
  2. Optional (strongly recommended) - Make sure you have installed the chocolatey package templates. If you’ve installed the chocolatey templates (ReadMe has instructions), then all you need to do is take a look at the chocolateyauto and chocolateyauto3 templates. You will note these look almost exactly like the regular chocolatey template, except they have some specially named token values.
    #Items that could be replaced based on what you call chocopkgup.exe with
    #{{PackageName}} - Package Name (should be same as nuspec file and folder) |/p
    #{{PackageVersion}} - The updated version | /v
    #{{DownloadUrl}} - The url for the native file | /u
    #{{PackageFilePath}} - Downloaded file if including it in package | /pp
    #{{PackageGuid}} - This will be used later | /pg
    #{{DownloadUrlx64}} - The 64bit url for the native file | /u64
  3. These are the tokens that chocopkgup will replace when it generates an instance of a package.
  4. Install chocopkgup (which will install ketarin and nuget.commandline). cinst chocolateypackageupdater.
  5. Check the config in C:\tools\ChocolateyPackageUpdater\chocopkgup.exe.config  (or chocolatey_bin_root/ChocolateyPackageUpdater). The PackagesFolder key should point to where your repository is located.
  6. Create a scheduled task (in windows). This is the command (edit the path to cmd.exe accordingly): C:\Windows\System32\cmd.exe /c c:\tools\chocolateypackageupdater\ketarinupdate.cmd
  7. Choose a schedule for the task. I run mine once a day but you can set it to run more often. Choose a time when the computer is not that busy.
  8. Save this Ketarin template somewhere: https://github.com/ferventcoder/chocolateyautomaticpackages/blob/master/_template/KetarinChocolateyTemplate.xml
  9. Open Ketarin. Choose File –> Settings.
  10. On the General Tab we are going to add the Version Column for all jobs. Click Add…, then put Version in Column name and {version} in Column value. 
       Create a Custom Field (Ketarin)
  11. Click [OK]. This should add it to the list of Custom Columns.
  12. Click on the Commands Tab and set Edit command for event to “Before updating an application”. 
    Ketarin settings - Commands Tab - Before updating an application
  13. Add the following text:
    chocopkgup /p {appname} /v {version} /u "{preupdate-url}" /u64 "{url64}" /pp "{file}" 
    REM /disablepush
  14. Check the bottom of this section to be sure it is set to Command
    Command selected
  15. Click Okay.
  16. Note the commented out /disablepush. This is so you can create a few packages and test that everything is working well before actually pushing those packages up to chocolatey. You may want to add that switch to the main command above it.

This gets Ketarin all set up with a global command for all packages we create. If you want to use Ketarin outside of chocolatey, all you need to do is remove the global setting for Before updating an application and instead apply it to every job that pertains to chocolatey update.

Create an Automatic Package:

Preferably you are taking an existing package that you have tested and converting it to an automatic package.

  1. Open Ketarin. Choose File –> Import… 
  2. Choose the template you just saved earlier (KetarinChocolateyTemplate.xml).
  3. Answer the questions. This will create a new job for Ketarin to check.
  4. One important thing to keep in mind is that the Application name needs to match the name of the package folder exactly.
  5. Right click on that new job and select Edit. Take a look at the following:
    Ketarin Job Notes
  6. Set the URL appropriately. I would shy away from FileHippo for now; the URL has been known to change, and if you upload that as the download url in a chocolatey package, it won’t work very well.
  7. Click on Variables on the right of URL.
    Variables
  8. On the left side you should see a variable for version and one for url64. Click on version.
  9. Choose the appropriate method for you. Here I’ve chosen Content from URL (start/end).
  10. Enter the URL for versioning information.
    Ketarin Variable Details
  11. In the contents itself, highlight enough good information before a version to be able to select it uniquely during updates (but not so much it doesn’t work every time as the page changes). Click on Use selection as start.
  12. Now observe that it didn’t jump back too far.
  13. Do the same with the ending part, keeping in mind that this side doesn’t need to be too much because it is found AFTER the start. Once selected click on Use selection as end.
  14. It should look somewhat similar to what is presented in the picture above.
  15. If you have a 64-bit URL you want to get, do the same for the url64 variable.
  16. When all of this is good, click OK.
  17. Click OK again.

Testing Ketarin/ChocoPkgUp:

  1. We need to get a good idea of whether this will work or not.
  2. We’ve set /disablepush in Ketarin global so that it only goes as far as creating packages.
  3. Navigate to C:\ProgramData\chocolateypackageupdater.
  4. Open Ketarin, find your job, and right click and select Update. If everything is set up correctly, in moments you will have a chocolatey package in the chocopkgup folder.
  5. Inspect the resulting chocolatey package(s) for any issues.
  6. You should also test that the scheduled task works appropriately.

Troubleshooting/Notes

  • Ketarin comes with a logging facility so you can see what it is doing. It’s under View –> Show Log.
  • In the top level folder for chocopkgup (in program data), we log what we receive from Ketarin as well as the process of putting together a package.
  • Make sure the name of the application in Ketarin exactly matches the name of the folder in the automatic packages folder.
  • Every once in a while you will want to look in Ketarin to see what jobs might be failing, and then figure out why.
  • Every once in a while you will want to inspect the chocopkgup folder to see if there are any packages that did not make it up for some reason or another, and then upload them.

Conclusion

Automatic chocolatey packages are a great way to grow the number of packages you maintain without any significant jump in maintenance cost. I’ve been working with and using automatic packages for over six months. Is it perfect? No, it has issues from time to time (getting a good version read or actually publishing the packages in some rare cases). But it works pretty well. Over the coming months more features will be added to chocopkgup, such as being able to run its own PowerShell script (for downloading components to include in the package, etc.) that would not end up in the final chocolatey package.

With full automation, instead of having packages that are out of date or no longer valid, you run only a small chance that something changed in the install script or that something no longer works. The chances of that are much, much lower than the chances of packages being out of date or no longer valid.

It takes just a few minutes longer when creating packages to convert them to automatic packages, but it is well worth it when you see that you are keeping applications and tools up to date on chocolatey without any additional effort on your part. Automatic packages are awesome!

this.Log– Source, NuGet Package & Performance

Recently I mentioned this.Log. Given the number of folks who were interested in this.Log, I decided to pull the source out and make a NuGet package (well, several packages).

Source

The source is now located at https://github.com/ferventcoder/this.log. Please feel free to send pull requests (with tests, of course). When you clone it, if you open Visual Studio prior to running build.bat, you will notice build errors. Don’t send me a pull request fixing this; I want it to work the way it does now. Use build.bat appropriately.

To try to cut down on the version number being listed everywhere, I created a SharedAssembly.cs (and a SharedAssembly.vb for the VB.NET samples). That helped, but it didn’t solve the problem where it was in the nuspecs as dependencies. So I took it a step further and created a file named VERSION. When you run the build, it updates all the files that contain version information. Having one place to handle the version is nice.

NuGet

When moving this.Log to a NuGet package (or in this case 9 NuGet packages), I was able to play with some features of NuGet I had not used previously: symbol servers and packing a csproj. By packing a csproj, I was able to quickly (well, mostly) set up the build to package up every project into NuGet packages.

All packages can be found by searching for this.log on NuGet.org.

NOTE: If you’ve installed any of these prior to this post, you will want to uninstall and reinstall them (there was a particular issue with the Rhino Mocks version). I’ve fixed and updated quite a bit on them from version 0.0.1.0 to 0.0.2.0.

Performance

Performance testing with log4net showed this adds an overhead of only 42 ticks, tested over 100,000 iterations. That’s a pretty good start given that it takes a reflection hit on every call.
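
If you want to get a rough feel for the cost of the this.Log() path on your own machine, a sketch along these lines works. This is not the original test harness; it assumes the this.Log extensions from the gist are in your project and that a logging engine (or the null logger) has been initialized.

using System;
using System.Diagnostics;

public class LogOverheadBenchmark
{
    public void Run()
    {
        const int iterations = 100000;

        // Warm up so the first-call dictionary/activator cost isn't counted.
        this.Log().Debug("warm up");

        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            this.Log().Debug("benchmark message");
        }
        watch.Stop();

        Console.WriteLine("Total ticks: {0} ({1} ticks/call)",
            watch.ElapsedTicks, watch.ElapsedTicks / iterations);
    }
}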

Introducing this.Log

One of my favorite creations over the past year has been this.Log(). It works everywhere including static methods and in razor views. Everything about how to create it and set it up is in this gist.

How it looks

public class SomeClass {
 
  public void SomeMethod() {
    this.Log().Info(() => "Here is a log message with params which can be in Razor Views as well: '{0}'".FormatWith(typeof(SomeClass).Name));

    this.Log().Debug("I don't have to be delayed execution or have parameters either");
  }

  public static void StaticMethod() {
    "SomeClass".Log().Error("This is crazy, right?!");
  }
 
}

Why It’s Awesome

  • It does no logging if you don’t have a logging engine set up.
  • It works everywhere in your code base (where you can write C#). This means in your razor views as well!
  • It uses deferred execution, which means you don’t have to mock it to use it with testing (your tests won’t fail on logging lines).
  • You can mock it easily and use that as a means of testing.
  • You have no references to your actual logging engine anywhere in your codebase, so swapping it out (or upgrading) becomes a localized event to one class where you provide the adapter.

Some Internals

This uses the awesome static logging gateway that JP Boodhoo showed me a long time ago at a developer bootcamp, except it takes the concept further. One thing that always bothered me about the static logging gateway is that it would construct an object EVERY time you called the logger if you were using anything but log4net or NLog. Internally it likely continued to reuse the same object, but at the codebase level it appeared as though that was not so.

/// <summary>
/// Logger type initialization
/// </summary>
public static class Log
{
    private static Type _logType = typeof(NullLog);
    private static ILog _logger;
 
    /// <summary>
    /// Sets up logging to be with a certain type
    /// </summary>
    /// <typeparam name="T">The type of ILog for the application to use</typeparam>
    public static void InitializeWith<T>() where T : ILog, new()
    {
        _logType = typeof(T);
    }
 
    /// <summary>
    /// Sets up logging to be with a certain instance. The other method is preferred.
    /// </summary>
    /// <param name="loggerType">Type of the logger.</param>
    /// <remarks>This is mostly geared towards testing</remarks>
    public static void InitializeWith(ILog loggerType)
    {
        _logType = loggerType.GetType();
        _logger = loggerType;
    }
 
    /// <summary>
    /// Initializes a new instance of a logger for an object.
    /// This should be done only once per object name.
    /// </summary>
    /// <param name="objectName">Name of the object.</param>
    /// <returns>ILog instance for an object if log type has been initialized; otherwise null</returns>
    public static ILog GetLoggerFor(string objectName)
    {
        var logger = _logger;
 
        if (_logger == null)
        {
            logger = Activator.CreateInstance(_logType) as ILog;
            if (logger != null)
            {
                logger.InitializeFor(objectName);
            }
        }
 
        return logger;
    }
}

You see how when it calls InitializeFor, that’s when you get something like the following in the actual implemented method:

_logger = LogManager.GetLogger(loggerName);
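
For reference, a minimal adapter for log4net might look something like the sketch below. The exact ILog interface lives in the gist; the members shown here are assumptions based on the calls used in the samples above, so treat this as an illustration rather than the canonical implementation.

using System;

public class Log4NetLog : ILog
{
    private log4net.ILog _logger;

    public void InitializeFor(string loggerName)
    {
        _logger = log4net.LogManager.GetLogger(loggerName);
    }

    // Deferred execution: the message is only built if the level is enabled.
    public void Info(Func<string> message)
    {
        if (_logger.IsInfoEnabled) _logger.Info(message());
    }

    public void Debug(string message)
    {
        _logger.Debug(message);
    }

    public void Error(string message)
    {
        _logger.Error(message);
    }
}

Wire it up once at application startup with Log.InitializeWith<Log4NetLog>() and everything calling this.Log() picks it up; swapping logging engines later means touching only this one class.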

So we take the idea a step further by implementing the following in the root namespace of our project:

/// <summary>
/// Extensions to help make logging awesome
/// </summary>
public static class LogExtensions
{
    /// <summary>
    /// Concurrent dictionary that ensures only one instance of a logger for a type.
    /// </summary>
    private static readonly Lazy<ConcurrentDictionary<string,ILog>> _dictionary = new Lazy<ConcurrentDictionary<string, ILog>>(()=>new ConcurrentDictionary<string, ILog>());
 
    /// <summary>
    /// Gets the logger for <see cref="T"/>.
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="type">The type to get the logger for.</param>
    /// <returns>Instance of a logger for the object.</returns>
    public static ILog Log<T>(this T type)
    {
        string objectName = typeof(T).FullName;
        return Log(objectName);
   }
 
    /// <summary>
    /// Gets the logger for the specified object name.
    /// </summary>
    /// <param name="objectName">Either use the fully qualified object name or the short one. If used with Log&lt;T&gt;() you must use the fully qualified object name.</param>
    /// <returns>Instance of a logger for the object.</returns>
    public static ILog Log(this string objectName)
    {
        return _dictionary.Value.GetOrAdd(objectName, Infrastructure.Logging.Log.GetLoggerFor);
    }
}

You can see I’m using a concurrent dictionary, which really speeds up the operation of going and getting a logger. I take the initial performance hit the first time I add the object, but from there it’s really fast. I do take a hit with a reflection call every time, but this is acceptable for me since I’ve been doing that with most logging engines for a while.

Conclusion

If you are interested in the details, see this gist.

Extensions are awesome if used sparingly. Is this.Log perfect? Probably not, but it does have a lot of benefits in use. Feel free to take my work and make it better. Find a way to get me away from the reflection call every time. I’ve been using it for almost a year now and have improved it a little here and there.

If there is enough interest, I can create a NuGet package with this as well.

Super D to the B to the A – AKA Script for reducing the size of a database

The following is a script that I used to help me clean up a database and reduce its size from 95MB down to 3MB so we could use it for a development backup. I will note that we also removed some of the data. I shared this with a friend recently and he used it to go from 70GB to 7GB!

UPDATE: Special Note

Please don’t run this against something that is live or performance critical. You want to do this where you are the only person connected to the database, like a restored backup of the critical database. Doing it against something live will most definitely cause issues. I can in no way be responsible for the use of this script. You should understand what you are doing before you execute these scripts.

So what does it do?

  • It gives you a report of what tables are taking up the most space.
  • It allows you to specify those tables for cleaning.
  • Gives you that same report of space used up by tables after the clean.
  • It rebuilds and reorganizes all indexes with reports before and after.
  • It runs shrink file on the physical files (potentially unnecessary due to the next thing it does, but hey, couldn’t hurt right?!).
  • It runs shrink database on the database.

The Script

Provided it shows up correctly, here is the gist:

/*
 * Scripts to remove data you don't need here  
 */


/*
 * Now let's clean that DB up!
 */

DECLARE @DBName VarChar(25)
SET @DBName = 'DBName'

/*
 * Start with DBCC CLEANTABLE on the biggest offenders
 */


--http://stackoverflow.com/questions/3927231/how-can-you-tell-what-tables-are-taking-up-the-most-space-in-a-sql-server-2005-d
--http://stackoverflow.com/a/3927275/18475
PRINT 'Looking at the largest tables in the database.'
SELECT 
 t.NAME AS TableName,
 i.name AS indexName,
 SUM(p.rows) AS RowCounts,
 SUM(a.total_pages) AS TotalPages, 
 SUM(a.used_pages) AS UsedPages, 
 SUM(a.data_pages) AS DataPages,
 (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB, 
 (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB, 
 (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM 
 sys.tables t
INNER JOIN  
 sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN 
 sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN 
 sys.allocation_units a ON p.partition_id = a.container_id
WHERE 
 t.NAME NOT LIKE 'dt%' AND
 i.OBJECT_ID > 255 AND  
 i.index_id <= 1
GROUP BY 
 t.NAME, i.object_id, i.index_id, i.name 
ORDER BY 
 OBJECT_NAME(i.object_id) 

 --http://weblogs.sqlteam.com/joew/archive/2008/01/14/60456.aspx
PRINT 'Cleaning the biggest offenders'
DBCC CLEANTABLE(@DBName, 'dbo.Table1')
DBCC CLEANTABLE(@DBName, 'dbo.Table2')

SELECT 
 t.NAME AS TableName,
 i.name AS indexName,
 SUM(p.rows) AS RowCounts,
 SUM(a.total_pages) AS TotalPages, 
 SUM(a.used_pages) AS UsedPages, 
 SUM(a.data_pages) AS DataPages,
 (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB, 
 (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB, 
 (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM 
 sys.tables t
INNER JOIN  
 sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN 
 sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN 
 sys.allocation_units a ON p.partition_id = a.container_id
WHERE 
 t.NAME NOT LIKE 'dt%' AND
 i.OBJECT_ID > 255 AND  
 i.index_id <= 1
GROUP BY 
 t.NAME, i.object_id, i.index_id, i.name 
ORDER BY 
 OBJECT_NAME(i.object_id) 

/*
 * Fix the Index Fragmentation and reduce the number of pages you are using (Let's rebuild and reorg those indexes)
 */


--http://ferventcoder.com/archive/2009/06/09/sql-server-2005-sql-server-2008---rebuild-or-reorganize.aspx 
PRINT 'Selecting Index Fragmentation in ' + @DBName + '.'
SELECT 
  DB_NAME(DPS.DATABASE_ID) AS [DatabaseName]
 ,OBJECT_NAME(DPS.OBJECT_ID) AS TableName
 ,SI.NAME AS IndexName
 ,DPS.INDEX_TYPE_DESC AS IndexType
 ,DPS.AVG_FRAGMENTATION_IN_PERCENT AS AvgPageFragmentation
 ,DPS.PAGE_COUNT AS PageCounts
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, NULL) DPS --N'LIMITED') DPS
INNER JOIN sysindexes SI 
    ON DPS.OBJECT_ID = SI.ID 
    AND DPS.INDEX_ID = SI.INDID
ORDER BY DPS.avg_fragmentation_in_percent DESC


PRINT 'Rebuilding indexes on every table.'
EXEC sp_MSforeachtable @command1="print 'Rebuilding indexes for ?' ALTER INDEX ALL ON ? REBUILD WITH (FILLFACTOR = 90)"
GO
PRINT 'Reorganizing indexes on every table.'
EXEC sp_MSforeachtable @command1="print 'Reorganizing indexes for ?' ALTER INDEX ALL ON ? REORGANIZE"
GO
--EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
--GO
PRINT 'Updating statistics'
EXEC sp_updatestats
GO

SELECT 
  DB_NAME(DPS.DATABASE_ID) AS [DatabaseName]
 ,OBJECT_NAME(DPS.OBJECT_ID) AS TableName
 ,SI.NAME AS IndexName
 ,DPS.INDEX_TYPE_DESC AS IndexType
 ,DPS.AVG_FRAGMENTATION_IN_PERCENT AS AvgPageFragmentation
 ,DPS.PAGE_COUNT AS PageCounts
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, NULL) DPS --N'LIMITED') DPS
INNER JOIN sysindexes SI 
    ON DPS.OBJECT_ID = SI.ID 
    AND DPS.INDEX_ID = SI.INDID
ORDER BY DPS.avg_fragmentation_in_percent DESC
GO

/*
 * Now to really compact it down. It's likely that SHRINKDATABASE will do the work of SHRINKFILE rendering it unnecessary but it can't hurt right? Am I right?!
 */

DECLARE @DBName VarChar(25), @DBFileName VarChar(25), @DBLogFileName VarChar(25)
SET @DBName = 'DBName'
SET @DBFileName = @DBName
SET @DBLogFileName = @DBFileName + '_Log'

DBCC SHRINKFILE(@DBLogFileName,1)
DBCC SHRINKFILE(@DBFileName,1)
DBCC SHRINKDATABASE(@DBName,1) 

References

Here are some of the references used in the gist:

  • http://stackoverflow.com/questions/3927231/how-can-you-tell-what-tables-are-taking-up-the-most-space-in-a-sql-server-2005-d
  • http://stackoverflow.com/a/3927275/18475
  • http://weblogs.sqlteam.com/joew/archive/2008/01/14/60456.aspx
  • http://ferventcoder.com/archive/2009/06/09/sql-server-2005-sql-server-2008---rebuild-or-reorganize.aspx

Refresh Database–Speed up Your Development Cycles

Refresh database is a workflow that allows you to develop with a migrations framework but deploy with SQL files. It’s more than that: it allows you to rapidly make changes to your environment and sync up with other teammates. When I am talking about environment, I mean your local development environment: your code base and the local database back end you are hitting.

Refresh database comes in two flavors, one for NHibernate and one for Entity Framework. I’m going to show you an example of the one for Entity Framework, which you can find in the repository for rh-ef on github.  One note before we get started: This could work with any migrations framework that will output SQL files.

What is this? Why should I use this?

How long do you spend updating source code and then getting your database up to snuff afterward so you can keep moving forward quickly? Do you work with teammates? Do you have multiple workstations that you might work from and want to quickly sync up your work?

It’s a pain most of us don’t see and an idea that was originally incubated by Dru Sellers. He wanted a fast way of keeping his local stuff up to date right from Visual Studio. Out of that was born Refresh Database. We are talking a simple right click and debug to a synced up database.

Others have talked in the past about how you want to use the same migration algorithm and test it all the way up to production. Refresh DB allows you to test that migration from a local development environment many times a day. So by the time you hand over the SQL files for production (or use RoundhousE), there is no guesswork about whether it is going to work or not. You have the security of knowing that you are good to go.

It’s definitely something that can really speed up your team so you never hear “I got latest and now I’m trying to sync up all the changes to the database.” This should be easy. This should be automatic.

You should never again hear “I made some domain changes but now I’m working to get them into the database.” This should be easy. This should be automatic.

Whether you decide to look further into this or not doesn’t matter to me. It just means my teams will get to market and keep updated faster than you (given the same technologies ;)).

How does this work?

This is the simple part. Convincing you to look at it in the first place is the hard part. I have put together a short video to show you exactly how it works. You will see that it is super simple.

Conclusion

Refresh Database has been around for over two years. It’s definitely something that has paid for itself time and again. It’s something you might consider looking at if you have never heard of it.

If you don’t do something with migrations and source control for your database yet, please start now. This will save you countless hours in the future. I’ve walked into more than one company that was hurting in the area of database development because they didn’t treat the database scripts as source code in the same way that they did the rest of the code. It’s a must these days. I also see teams doing development against a shared database. This is a huge no-no (except in certain situations) due to the amount of lost time it causes. That, however, is a discussion for another day.

HowTo: Use .NET Code on a Network Share From Windows

If you use VMWare/VirtualPC and you want to offload your source code repositories to your host OS and code from it inside the VM, you need to do a few things to fully trust the share.

I’ve found that I keep searching for this every time I need it, so I thought I would write it down this time to save myself the trouble next time.

CasPol Changes

Save the following as caspol.bat:

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v2.0.50727\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\caspol -q -machine -ag 1.2 -url file://e:/* FullTrust

%WINDIR%\Microsoft.NET\Framework\v2.0.50727\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v2.0.50727\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\caspol -q -machine -ag 1.2 -url file://\\vmware-host\* FullTrust

Make sure you replace the file locations appropriately. Then run it as an administrator.

This will do the first part of allowing your code to execute without security exceptions. Credit to Chris Sells for the most comprehensive article on this: http://www.sellsbrothers.com/Posts/Details/1519 

Make VMWare Share Part of the Local Intranet

This is one step I found necessary to get stuff to build that I didn’t find documented anywhere else. Even after running caspol I still couldn’t run executables on the share. That is, until I made the share part of the Local Intranet zone.

  • Open Internet Explorer, then open Internet Options.
  • Find the Security Tab
  • Open Local Intranet by selecting Local Intranet and pushing the Sites button
  • Click Advanced
  • Now add file://vmware-host to the list
  • Click Close when completed
  • There is a picture below for reference

Setting Local Intranet

 

This will allow executables to start working, all except the ones built and run from Visual Studio.

.NET Built Executables/Services No Longer Work

It may be a while before you run into this one. Say you have a console application you are building. You will notice that once you move it over to the share, you start getting errors when you try to run it from there. What you need to do is add a small configuration value to the config files.

Add the following to your config files (inside the <configuration> element):

<runtime>
  <loadFromRemoteSources enabled="true" />
</runtime>

This will allow the assembly to be loaded into memory; otherwise it will not run from a network share.

Caveats to Network Share

Caveats to think about when developing against a share:

  • Visual Studio has trouble noticing updates to files if you update them outside of Visual Studio.
  • If you run the local built in web server for web development, don’t expect it to catch the files updating automatically.
  • If you do any kind of restoring a database from a backup, you may want to consider copying that database to a local drive first.

Chocolatey featured on LifeHacker!

Chocolatey was just featured on LifeHacker! http://lifehacker.com/5942417/chocolatey-brings-lightning-quick-linux+style-package-management-to-windows

I was ecstatic to hear about this. Of course, now I need to write an actual comparison between chocolatey and other Windows package managers.

Comments on Reddit: http://www.reddit.com/r/commandline/comments/zqnj6/chocolatey_brings_lightning_quick_linuxstyle/

How To: Improve Skype Quality

I always forget this until I need it the next time, but there is a great post that talks about how to fix your skype quality. http://pauloflaherty.com/2008/03/26/improve-skype-quality-with-these-tips/

1. In Skype, go to Tools > Options > Connection. Select the option to use ports 80 and 443. In the “Incoming Connections” box you can choose any port between 1024 and 65535.

2. Reconfirm that your firewall is correctly configured. Follow the simple visual guide here:

http://www.skype.com/help/guides/firewalls/

3. Quit any file sharing applications or high-bandwidth usage applications.

4. For more detailed security setup on a network: http://www.skype.com/security/guide-for-network-admins.pdf (does not work anymore)

5. If these suggestions do not improve call quality, follow these steps:

* Quit Skype

* Locate the shared.xml file found in
C:\Documents and settings\Your Windows Username\Application data\Skype\shared.xml

* Delete the file called shared.xml

* Restart Skype (shared.xml will be recreated)
Note: Showing hidden folders and files has to be turned on. To enable it, navigate to:

In XP – My Computer > Tools (Menu) > Folder Options > View.

In Windows 7 – Open Explorer >Organize (Menu)> Folder And Search Options > View.

Once there, please make sure that the option “Show Hidden Files and Folders” is enabled.

6. Disable Quality of Service packet scheduling. Go to Start -> Control Panel -> Network Connections.  Right click on the connection you are using. Select Properties. Untick the “QoS Packet Scheduler” option.

I do steps 1-3, 5, and 6. For step 2, please make sure your firewall is port forwarding your skype port to the proper computer. That is where you get the best performance.

Step 5 is a maintenance one that you will find yourself doing from time to time when things start to slow down. Instead of deleting shared.xml, I just append the date to the end of it in YYYYMMDD format.

Hope this helps someone that is trying to improve conversations with Skype.

Entity Framework and Stored Procedures Issue - Unable to determine a valid ordering for dependent operations. Dependencies may exist due to foreign key constraints, model requirements, or store-generated values

When working with EF Database First (don’t ask) and mapping stored procedures you may run into this issue.

Julie Lerman has written a great article on how to do the mappings, with some code to download so you can inspect how to set up the insert, update, and delete mappings appropriately for use with stored procedures (http://msdn.microsoft.com/en-us/data/gg699321.aspx).

You may have searched everywhere else and have not been able to find a satisfactory answer. In some cases your model has a circular dependency and there are multiple search results that will help you with that out there.

In my case the problem came down to using a “Manage”-type sproc that would handle both insert and update. As you can imagine, you would pass the primary key field to the sproc no matter what.

Entity Framework believes this is an association (possibly to a foreign key) so it gives the error above. When you convert it to using separate Insert and Update stored procedures where the insert does not pass in the PK, everything works appropriately.

So if you are getting the above error, make sure you are not mapping the PK in the insert procedure.

Hope this helps some poor soul who falls upon this issue.

Remote Work: Placeshift and Stay Highly Collaborative Part 2–Focus on YOU

Companies want to hire the type of person who is cut out to be a remote worker. The type of person who can work remotely is the type of person who excels at their work, and that is exactly what companies are always looking for.

In the first part of this series we talked about what remote work is and how a business benefits from remote workers. In this article we are going to focus on you. What does it take to be a remote worker? Is remote work possible in your job? How do you work from home when there are distractions?

NOTE: The following is not a definitive list and not true for every situation. Some of this represents what works well for me in my experiences over the last few years.

Can YOU be a Remote Worker?

Are you the remote worker type? This is always an interesting question. I don’t believe this is a type you are either born into or not; these are behaviors you can learn and become if you just know how. So what are the key behaviors of a remote worker? Surprisingly, they are strikingly similar to what companies prefer in their best workforce:

  • Self-Sufficient
  • Self-Starting
  • Disciplined / Focused
  • Motivated

With that in mind, it’s not a huge leap to see why companies would actually want to hire the type of worker who is cut out to be a remote worker. “Wait a minute, didn’t you say this was learned?” Yes, many of these are learned behaviors. Let’s take a look at each in more detail.

Self-Sufficient

You must know how to do your job well enough that you don’t need someone helping you through the work once you can stand on your own. This doesn’t mean you need zero guidance; we all need help from time to time. I won’t put a number on what determines self-sufficiency; I think most of us know whether we know our jobs or not. If you are not sure, you can probably ask your peers whether they feel you are self-sufficient.

If you are paying attention, you may have just realized that this means someone “junior” (or just starting out in a new industry) should not be a remote worker. Why? The ability to work effectively without a lot of guidance usually comes only once you have good knowledge of how to do your job and how to do it well.

Folks new to an industry should probably shy away from trying remote work until they are more comfortable in their roles, responsibilities and, simply put, skills and abilities. The biggest reason a “junior” worker should shy away from remote work is that the most important career objective for them is to learn, and that is harder to do when they are remote. As a junior worker you should want to pair with others to learn how to do things better. The best type of learning is always face to face. It’s hard enough to teach someone face to face; doing it remotely compounds all of the issues that come along with paying attention to the non-verbal cues of whether someone is catching on or not.

Self-Starting

When you are not physically around others working towards the same goals, it can sometimes be unclear what you should be doing. Keep in mind it is not the company’s job to make sure you have something to do. It is your responsibility. To be actively engaged, you need to take an active role in making sure you have work to do. Being a self-starter is highly valuable to a company because they know you are not just going to sit around and wait for something to do. You are going to ask when you need something to do. That means you are producing something for the company to offset what you cost the company. You want to make the company more money than it costs to keep you. This makes you valuable to the company, which is especially important when you are not physically present.

Disciplined / Focused

Discipline and focus mean you can work when there are distractions around you. Does that mean you don’t work to eliminate distractions? No; in fact, eliminating distractions is extremely important for many of us. Being able to concentrate amid distractions can be very difficult and stressful in the long term, so I would highly recommend removing distractions. How do you do that? We’ll get to that when we talk about how you help your home support remote work.

Discipline is a learned behavior. How do I know? Like many folks I know, I’ve been in the military. I have seen firsthand how people become disciplined. It’s really a matter of habit. You do the same thing over and over until it just becomes a habit. So if you want to be disciplined, you just practice discipline for some amount of time (some say 21 days straight) and from then on it will be a habit.

Focus is a little harder to achieve. I believe focus comes with discipline. If you are distracted by the twitters and you have the discipline to only turn it on at breaks, you can stay focused on what you are working on the rest of the time. You gain focus by removing distractions until all that is in front of you is what you need to accomplish.

Motivated

I left this for last because motivation is a weird animal. People are motivated for different reasons. It’s really about learning what motivates you. To be motivated in this sense really just means being driven to accomplish the goals of the company.

When I was in high school I remember listening to a Tony Robbins lesson on how you can categorize folks into two types of motivation: positive and negative. I can’t find the source of this, but it boils down to people being motivated in two ways: pain and pleasure. You cannot motivate a negatively motivated person with positive reinforcement, and you would offend a positively motivated person with negative reinforcement. I digress. The point I’m trying to make here is to find what motivates you and adapt some if needed so that it aligns with the goals of the company that you work with.

I believe that motivation can be a learned behavior with the proper conditioning. At the root of all of everything about being a remote worker is motivation. You need to be motivated to succeed at remote work. You need to be motivated to try remote work. You need to be motivated to possibly pursue a new home with a good setup for working remotely. You need to be motivated to learn new ways to enhance your communication with those surrounding you at work.

You

Now that we’ve talked about you, let me say that some or all of these qualities lend themselves well to remote work. Does that mean this is true in all situations? Absolutely not. Each situation is unique, and what works well in one situation may not work well, or even make sense, in another. If you are motivated to make remote work work for you, you will find a way to make it happen. And this list may not even describe you at all.

Next up we are going to talk about jobs that lend well to remote work.

Remote Work: Placeshift and Stay Highly Collaborative Part 1

The biggest complaint most remote workers have in regards to working on a team? Feeling disconnected. The biggest complaint an office has about remote workers? They forget the remote workers are there and don’t always trust what they are doing. Want to learn how to get past both issues?

Hi, my name is Rob and I have a confession to make. I’m a remote worker four days a week. I’m a placeshift remote worker, and yet I am still highly collaborative with my team. “Placeshifting?” you say. “Highly collaborative?” you say. Over the next series of articles I am going to show you how this can be done.

If you are a business and you have not seriously looked into a technology known as Embodied Social Proxies, you are paying opportunity costs. You are losing money. More on that below. This series is for you so pay attention. I will highlight both business benefits and worker benefits.

If you are a worker and you have considered working from home (or just remotely) but you are not quite sure how you would make it work, this series is for you. Or you are already doing remote work and want to learn how to collaborate better.

Two Types of Remote Work

Timeshift – This is when you perform work at different times than the mainstream office performs the work. Many folks have done this kind of work in one respect or another, even when working a regular full time job. If you ever went home and continued working in the evening, you have done what some might consider timeshift remote work. This series is not geared to this type of remote work.

Placeshift – Placeshifting is when you perform work at the same time as everyone else, but at a different location. This is what most people think of when they hear the term remote workers. If you ever have work from home days, you know what it is like to placeshift. This series is geared to this type of remote work.

The terms placeshifting and timeshifting are borrowed from the media industry (television, music, etc.) with respect to devices like DVRs. Not quite clear? When you record a TV show and watch it later, you are timeshifting the show. Timeshifting dates back to the 1970s with VCRs and Betamax, while placeshifting media is a newer concept made possible by devices like the Slingbox. When you use a Slingbox to watch a show from a device like your phone at the same time the show is playing, you are placeshifting. The difference should be clear when you think of placeshifting as same time, different location and timeshifting as different time, location irrelevant.

This same terminology can be applied to remote work. Although I was hoping to coin the remote work types terminology, Anybots and GigaOm beat me to print with their recent article (How and why robots are placeshifting remote workers). At least this means the terminology is sound.

Bottom Line

Placeshifting remote work is not for everyone and not for every type of business work either. Some jobs have physical requirements or security requirements that negate the ability for remote work. Not every person is able to be productive in a setting outside the office (and the converse is also true). The world is not fair, okay? Get over it. If you are someone who can work by yourself and do so well without being easily distracted (read: there are ways to remove distractions in a work from home situation – I’ll touch on those), then it’s possible you have what it takes to be a remote worker.

Business: We Tried Remote Workers Before, It Didn’t Work

This is the argument I hear the most. The biggest problem with this argument is that it is subjective. Remote work itself is subjective and situational. No two remote workers are going to be alike, and no two situations are going to be the same. It’s possible you tried remote work with an individual who was not able to work remotely effectively. It’s highly possible you had an employee who moved away and you wanted to keep them, so you allowed them to work remotely. But you may not have set yourself (and the individual) up for success. How much planning and research did you do prior to these remote work situations? How much did you do to enable your remote worker? Did you attempt to manage your remote worker in the same way as the centrally located folks? Had you even heard of Embodied Social Proxies prior to reading this?

The awareness I am trying to raise is that there are ways for businesses to make this work. You can benefit hugely from remote workers if you do the proper planning and research and understand the guidelines for making it work in your situation.

How Do I Benefit as a Business?

Talent Pool

Here’s a hard pill to swallow – you are limited by your talent pool. If you require people to be onsite for work, you are limited by the area in which you do business. I hate to be the one to inform you, but you are not the most awesome place to work. I’m sorry. No matter how awesome you are there is somewhere else that is more awesome and does x better. It’s a losing battle. Get over it already.

In this day and age, fewer and fewer people will move just to work for you. If you expect the most talented folks in your industry to relocate for you, I have to tell you that 1990 called. I’m sorry to inform you it’s not going to happen in every case. And if it does, it’s borrowed time, because someone else is going to attract them away.

It’s likely the most talented people in your industry will never work for you if you don’t have a remote option available. There are many reasons, but it boils down to where you expect your talent to live.

Happy Workers Are Superfans (and Productive Workers)

This is so huge I can’t even begin to give it the proper amount of attention. You want your workers to be happy. Tom Preston-Werner, cofounder of GitHub, speaks to this in a presentation called Optimizing For Happiness. Please go watch it now. The bottom line is that if you keep your workers happy, they are much less likely to leave your organization. Turnover costs are huge to a company. If you are not making your employees happy, they are talking to others about not working for you. They have their ears open to new opportunities. They are likely looking for other jobs as you read this.

If you think you are making your employees happy, I would ask what metric you use for evaluation. I’ll be the first to tell you that you are not doing enough to keep your employees happy. If you give out raises once a year and they are around 3-5% across the board, you might be doing it wrong. Not every employee is created equal, not every employee performs at the same level. Why would you pay them the same? Why would you give them the same raises?

I’m going to make a bold statement here: Your best people outperform your middle of the line folks by ten times. If you are not paying them ten times as much or even five times as much, you might want to re-evaluate how truly happy you are making your employees. If you are not challenging your employees, you are boring them and they will find something more exciting. If you are not doing x you are likely not making your employees happy. You need better metrics into what makes for happy workers.

Facility Costs

Your facility costs are significantly lower when it comes to remote workers. A remote worker or semi-remote worker takes up a lot less space than a full time worker. If they come into the office once or twice a week, they will take up some space during that time, but the rest of the week that space could be used by other remote workers when they come into the office. Think of this as space sharing.

Remote workers don’t bring/keep a lot of items in the office. Seriously. Get up and walk around your office. Take a look. Notice how much stuff each worker has surrounding their areas. Notice how much space they take up. Go ask how much it costs for the space of each worker you have in the office per month. If you don’t have this number on hand, you won’t understand what it costs for that worker.

This actually isn’t that hard to calculate if you don’t have it. Just find out the monthly costs of your office space (electricity, rent, etc.). Now take that number and divide by the number of workers you have on site. This will give you a rough estimate; for example, $20,000 a month in rent and utilities spread across 40 onsite workers is roughly $500 per worker per month. There are ways to get more accurate estimates, but this is a good start.

For the space of that one onsite worker, you might be able to put 5-10 remote workers in there (if you build and use embodied social proxies which are highly recommended and will be discussed during this series). Imagine that. 5-10 remote workers in that same space. That means for every 10 remote workers you hire, you can only hire one onsite person. Kind of sounds weird to hear it like that, right?

Bigger Staff – More Work In The Pipeline

This is probably the most overlooked opportunity cost when it comes to remote workers. What you can accomplish is limited by the number of folks you have. When you open up to remote work, you also open up to the fact that you can take on more work. More work, in many cases, means more revenue for your business. This is huge.

Final Thoughts For Businesses

Remote work is not without its challenges, but I can tell you that the benefits far outweigh them. If you’ve tried remote work in the past and it didn’t work out, don’t let that be a limiter to trying again. If Thomas Edison had quit the first time he failed, he might not have been credited with the invention of the light bulb as we know it! Failure is a step on the road to success. Food for thought.

Remote Work Series

  • Next up I’ll talk about what individuals need to be successful remote workers.
  • Building an Embodied Social Proxy, aka, the Remote Portal for a practical cost
  • Possibly other follow ups to come

Chocolatey - Guidance on Packaging Apps with Both an Install and Executable/Zip Option

One of the thoughts I've been considering recently with chocolatey is consistency in package naming conventions as chocolatey continues to grow. It's fine to name packages by the app/tool name; that's both intuitive and expected. What I am more interested in is when an application has multiple installation options (i.e. an MSI and a ZIP). It can become confusing for people when they install a package that has both and they don't know which one they are getting. If an app starts out with a .zip and later releases an MSI (nodejs anyone?), what do you call the package that installs the MSI? Do you keep around the executable package? Do you rename the original package in response to the other option? Is there a third option?

One Option

If there is only one option available, you are fine to make the package name the same as the application/tool. This makes it intuitive and reduces confusion. 

Multiple Options

To start putting together guidance on this and alleviate confusion, the plan is to move forward in these cases with three packages: one with no suffix, one with an ".install" suffix, and one with a ".commandline" suffix.

If you would take a quick look at 7zip (http://chocolatey.org/packages?q=7zip), you will notice there are three packages here. 
  • 7zip is what will ultimately be a virtual package
  • 7zip.install is the package name for a package that uses a native installer (i.e. MSI, exe)
  • 7zip.commandline is the package name for a package that has an executable / downloads & unpacks an archive / etc
7Zip right now is taking a dependency on 7zip.install (which makes it a meta package). When virtual packages (see Virtual Packages below) are ready, that dependency will be removed and the chocolateyinstall.ps1 file will look something like the following (this is not definitive of what it will look like though):
 Install-VirtualPackage '7zip.commandline' '7zip.install'
You will notice I put ".commandline" ahead of ".install". In the end, I think the behavior of a virtual package should default to the command line version. Why? There are folks who do not have administrative access to their machines. Chocolatey is really nice for them because they can install and use chocolatey without ever needing to assert administrative privileges. Marcel Hoyer (https://twitter.com/pixelplastic) first proposed the idea of being able to use chocolatey without administrative privileges. He and I took pains to make chocolatey work for these scenarios. This did complicate chocolatey a little bit for the package maker, but in the end I think it is a really good thing. Someone inspecting a package to decide whether to install it can see every point where the package maker said administrative privileges were needed.
That said, the default will be the one on the leftmost side. You are beholden to the community in justifying why you didn't put the command line version first if you decide not to in the virtual package. But chocolatey won't constrain you on that because you may have a really good reason.
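
From the consumer side, the split looks something like the following. These are just illustrative command lines using the cinst alias and the 7zip package ids listed above.

# Take the default. Today this resolves through the meta package that depends on 7zip.install;
# once virtual packages exist, it will resolve to whichever flavor satisfies the virtual package.
cinst 7zip

# Be explicit about the native installer (MSI/exe) flavor.
cinst 7zip.install

# Or be explicit about the flavor that downloads/unpacks the executable (no admin required).
cinst 7zip.commandline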

App Now has Multiple Options

When an application/tool moves to having multiple options, like an installer it didn't used to have, that's when it is time to break the package out into a virtual package (a meta package for now, until virtual is available) and create the other two packages with the correct suffixes as outlined in the guidance above.

Virtual Packages

For those confused about the idea of a virtual package, it allows folks to say I need to take a dependency on a PDFReader. PDFReader becomes a virtual package that does nothing other than point to all of the different pdf readers available. When someone installs the package that has a dependency on PDFReader, chocolatey looks at the virtual options and sees you have adobereader installed (one of the options in the list). So it moves on because you have met the virtual package requirements. If you have foxitreader installed, it moves on. Otherwise it picks the first item in the virtual tree and installs it as the default. More information? https://github.com/chocolatey/chocolatey/issues/7
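
To make that resolution rule concrete, here is a conceptual sketch in PowerShell. This is not actual chocolatey code; the function name and the package names are only for illustration.

# Conceptual sketch only - not real chocolatey code.
# A virtual package is satisfied by any one of its options; otherwise the first option is the default.
function Resolve-VirtualPackage {
    param(
        [string[]] $Options,            # e.g. 'adobereader', 'foxitreader'
        [string[]] $InstalledPackages   # what is already installed on the machine
    )

    foreach ($option in $Options) {
        if ($InstalledPackages -contains $option) {
            Write-Host "'$option' already satisfies the virtual package - nothing to install."
            return
        }
    }

    # Nothing installed satisfies it, so the first (leftmost) option gets installed as the default.
    Write-Host "Installing default option '$($Options[0])'."
}

# Example: neither option is installed, so the first one ('adobereader') would be chosen.
Resolve-VirtualPackage -Options 'adobereader','foxitreader' -InstalledPackages @('git','7zip')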

Virtual Packages vs Meta Packages

A meta package is one that points to other packages. If you think of a package that does nothing more than take on dependencies to other packages, that is a meta package. A virtual package is like a meta package, except it has the concept of optional dependencies.

Ending Thoughts

On the surface, this seems to be the best way to provide an intuitive user experience. There may be some things we learn along the way, and we will adjust as we go. If you are a package owner and you have packages that have both options, you may want to start getting them into this format. I myself have some work to do in this respect.
Thoughts?


Software Release Management - Why You Can’t And Shouldn’t Force People to Use the Latest Version

As software creators we don't get to decide what version of our tools and libraries people use. If we try to force them, our users will go somewhere else.

Update: What Type of Software This Applies To

This post talks about tools, applications and libraries: things that end up in users’ hands. This does not apply to SaaS or websites, which do not end up in the hands of users in the same sense.

For those of you who immediately think of Chrome or Firefox, which are applications that end up in the users hands, those apply to this post as well. They have nearly perfected a silent upgrade experience, but if they ever mess up that experience, users can choose to use something else. And I believe there is a way to opt out as well (not easily achieved but possible).

Software Release Management

I write software. Much of it is open source. I have multiple versions of my products out there. Even with newer versions available that fix bugs and bring about new features, I still find people using older versions. Even though I have a better newer version that fixes some of the bugs they are dealing with, they are still using an older version. Think about that for a second. There must be a good reason right? Let’s state this in an official sense.

As a software creator you release software. You put a release out there and people use that release. You delineate different releases by a concept of versioning. People use a particular version of your release. You release newer versions of your software that have fixes and enhancements. You hope users upgrade to the latest release when it is available.

I’ve stated five facts and finished with a hope. If you can accept those as facts, we can move on. If you can’t, then you might want to stop reading now because we are never going to agree. If you are a developer like me, you really want people to always use the latest version of your software, so you might be able to accept the last statement as a fact for you. I really want people to always use the latest release of my software, as I have gone through the trouble of testing it and making it better.

Now let me change some terms for you. Software release management is really a fancy way of saying package management. A software release could be better termed a package. So to restate: as a software creator, you release packages. You put a package out there and people use that package. You delineate different packages by a concept of versioning. People use a particular version of your package. You release newer versions of your package that have fixes and enhancements. You hope users upgrade to the latest package when it is available.

The Hope Versus The Force

I say “hope they upgrade” because you really can’t control that aspect. You can try. You can delete the older versions. You can refuse to have older versions available. You can tell users that they should and need to upgrade. But you put it out there once and it is now out there forever. People will find a way to get to the particular version they need. Or they will go elsewhere. Users speak with their feet.

I find attempting to force a user to do something is both an exercise in futility and a great way to guarantee that you have fewer users overall.

So people must want to use a particular version of a product. Let’s examine this a little more. Why on earth would someone use an older version of a product when a newer, better, less buggy version is available?

Why Do Users Use Older Versions?

Users use older versions of our packages and they have great fundamental reasons for doing so:

  • It reduces their risk.
  • It guarantees that users of their library (that has a dependency on your library) have a good experience.
  • It guarantees that the product that they have tested is the same product that gets into the hands of consumers.
  • It guarantees their product builds successfully and the same way each time.

In fixing a product and making it better and less buggy, you may actually be breaking someone’s ability to use a newer version. And you have no guarantee to the user that this version doesn’t have flaws of its own. Right? Otherwise there would only ever be one version that had fixes in it. We wouldn’t need to release newer versions with fixes, only enhancements. But that’s not how it goes. We fix things we thought worked, and we fix things we tested but missed some crazy edge case on. This is why we go down this path of release management. This is software development.

So people get a certain version and they use it. Users upgrade to the latest version of software when they are ready, not when the software creator is ready. People depend on certain versions or on a range of versions. In reality I can't force someone to use the latest version. If I try, they will find the version they need through the powers of the internet or find another way. Accepting that, I can give them a way to see the latest version and help them fall into the pit of success.
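
In package manager terms, that means letting folks ask for the exact version they depend on. With chocolatey's current client that looks something like the lines below; the package name and version number are made up for illustration, and the option spellings are from memory, so double check them against the client you have.

# Install exactly the version the user has tested against, not whatever is newest.
cinst somepackage -version 1.2.3

# Upgrading stays the user's call, on the user's schedule.
cup somepackage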

From the User Perspective

Shifting to the perspective of the user: I might use your library in my own software. Being able to build my product, even if that means using an older version of your package that has bugs, is worlds more important to me and my users. We'll get to your latest version when we can test that it doesn't break our product. But don't try to force me to upgrade to your latest version or I will find another way. I'm not singling out your package; with any package, the newer version may be buggier than the current buggy version we are using. We don't know, and you can’t guarantee that it isn’t, even with extensive testing. Testing doesn’t prove the absence of bugs, only the absence of the errors that you know about. I digress.

It’s an evil that we know versus an evil that we don’t. Or put another way, it's a buggy version we know versus a buggy version we don't.

If it’s a tool, we need to ensure that our usage of your product still meets our expectations. We need to test it, even though you did, and make sure it still works for our needs and scenarios. Where it doesn’t, we need to decide whether we can shift our expectations and upgrade anyway. But we are not going to blindly upgrade and just use the latest version because the software creator believes that is best.

Can you cover the millions of dollars that we might lose by taking on a newer version of your product? If you can give me that guarantee, as a user I will gladly pass that risk on to you.

Final Thoughts

Whether you agree or not, as software creators we don't get to decide what version of our tools and libraries people use. We just don’t have that luxury. If we try to, our users will go somewhere else. So we make it easy for them to upgrade so they will want to. We make the upgrade experience painless so they will want to. We need to be good stewards.

DropkicK–Deploy Fluently

DropkicK (DK) has been in development for over two years and has been used for production deployments for over a year. Dru Sellers originally posted about DK back in 2009. While DK isn’t yet as super easy to grok as some of the other ChuckNorrisFramework tools and offers little in the way of conventions, it is still a stellar framework to use for deployments.

DK works best when you know all of the environments you will deploy to ahead of time (although that is not strictly required, thanks to the ability to pull in JSON settings files and servermaps). It is not for every environment: DK needs to be able to reach the remote location through UNC (except for the database; and if you deploy from a local server every time, this won't be an issue). DK is continually improving, so expect FTP-style deployments to be added as well.

I am going to stay somewhat introductory, so you won’t see this post get too detailed into exactly how you can use DK for deployments. That would be best covered by reading the wiki and looking at examples or a series of articles.

Concepts of Kicking Your Code Out with DropkicK

Deployment Step – The smallest unit of execution in a deployment. A step is involved with getting one thing set up during a deployment, such as copying files or setting a folder permission.

Deployment Task – This is a collection of one or more steps that make something happen during the deployment. Say a task is to copy some files. A step in that task might be to clean/clear folders. Another step is to remove read-only attributes. The last step is to actually copy the files/folders. This is nearly synonymous with the concept of deployment steps and is often referred to that way, even by the maintainers of DK.

Deployment Role – A role is a collection of tasks that, as an atomic unit, sets up a particular area of a deployment, like a database or a web site. A role contains one or more deployment tasks.

Deployment Plan – This is a collection of all roles for making a deployment happen. This is what you write when you sit down to write a dropkick deployment for your code.

Deployment Settings – These are settings you can draw from in any deployment step. A core concept in DK is the idea of environments, and it is baked into all settings.

Deployment JSON Settings – This is the equivalent of the deployment settings, with the actual values that you want the deployment settings to get at run time. It is kept separate so that you can make changes prior to deployment if you need to.

Deployment Server – A deployment role is targeted against one or more servers.

Deployment ServerMaps – This is the physical server or servers that you want to target Deployment Roles to for a particular environment. Each role you want to deploy will need at least one physical location.

Remote Execution – When certain tasks must be run against the server they are targeting, DK will copy over an executable to a known location on that machine, run it through WMI on that particular machine, wait for it to finish, and then bring the execution log back to the main logs. This means you do not need a service installed on the remote machine for installation.

Deployment Logs – DK puts together a few logs during the deployment. The one you see in the console is a summary of what is happening. There is a run log that contains details of everything that is happening. There is also a db log, a security log, and a file change log. These logs can be passed to each party that cares about them after a deployment for auditing's sake.

NuGet Install

If you want a quick start on seeing a good example of DK, just pull in the dropkick nuget package and it will bring in some sample code.
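
From the Visual Studio Package Manager Console, that is a one-liner (the package id is the dropkick package mentioned above):

# Pulls DropkicK into the project along with the sample deployment code.
Install-Package dropkick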

Running DropkicK

DropkicK expects you to tell it where the deployment DLL file is, what environment it is deploying to, what roles it is deploying, and where the deployment settings files are located. It runs in trace mode by default, determining if one can actually execute the deployment plan (has permissions, servers exist, etc).

The syntax for running dropkick is:

dk.exe [command] /environment:ENVIRONMENT_NAME [/ARG_NAME:VALUE] [--SWITCH_NAME]

At a minimum you can run dropkick with dk.exe execute. This will deploy all roles to the ‘LOCAL’ environment using ‘Deployment.dll’ (sitting next to dk.exe), looking for ‘.\settings\LOCAL.servermaps’ and ‘.\settings\LOCAL.js’.

dk.exe execute /deployment:..\deployments\somename.deployment.dll /environment:LOCAL /settings:..\settings /roles:Web,Host

The above should give you an idea of all of the options you can pass to DK for execution.

You can pass a silent switch to DK to allow for completely silent deployments. Although rough at the moment, there is a wiki article for deploying from TeamCity. That can be found here: https://github.com/chucknorris/dropkick/wiki/TeamCityIntegration
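
On a build server the command ends up looking much like the execute example above, plus the silent switch so nothing prompts during the build. The switch spelling below is my assumption from memory; check dk.exe's help output for the exact name in your version.

# Unattended deployment from a CI agent - same options as the earlier example, plus the silent switch (name assumed).
dk.exe execute /deployment:..\deployments\somename.deployment.dll /environment:LOCAL /settings:..\settings /roles:Web,Host --silent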

Enough Talk - Show Me the Code!

The below code shows an example deployment plan for executing a deployment to Db, Web, and Host roles. It leaves out the Virtual Directory setup, but that can be easily brought in from looking at an example (https://github.com/chucknorris/dropkick/blob/master/product/dropkick.tests/TestObjects/IisTestDeploy.cs).

using System.IO;
using System.Security.Cryptography.X509Certificates;
using dropkick.Configuration.Dsl;
using dropkick.Configuration.Dsl.Files;
using dropkick.Configuration.Dsl.Iis;
using dropkick.Configuration.Dsl.RoundhousE;
using dropkick.Configuration.Dsl.Security;
using dropkick.Configuration.Dsl.WinService;
using dropkick.Wmi;

namespace App.Deployment
{
public class TheDeployment : Deployment<TheDeployment, DeploymentSettings>
{
  public TheDeployment()
  {
      Define(settings =>
      {
          DeploymentStepsFor(Db,
                             s =>
                             {
                                 s.RoundhousE()
                                     .ForEnvironment(settings.Environment)
                                     .OnDatabase(settings.DbName)
                                     .WithScriptsFolder(settings.DbSqlFilesPath)
                                     .WithDatabaseRecoveryMode(settings.DbRecoveryMode)
                                     .WithRestorePath(settings.DbRestorePath)
                                     .WithRepositoryPath("https://github.com/chucknorris/roundhouse.git")
                                     .WithVersionFile("_BuildInfo.xml")
                                     .WithRoundhousEMode(settings.RoundhousEMode);
                             });

          DeploymentStepsFor(Web,
                             s =>
                             {
                                 s.CopyDirectory(@"..\_PublishedWebSites\WebName").To(@"{{WebsitePath}}").DeleteDestinationBeforeDeploying();

                                 s.CopyFile(@"..\environment.files\{{Environment}}\{{Environment}}.web.config").ToDirectory(@"{{WebsitePath}}").RenameTo(@"web.config");

                                 s.Security(securityOptions =>
                                 {
                                     securityOptions.ForPath(settings.WebsitePath, fileSecurityConfig => fileSecurityConfig.GrantRead(settings.WebUserName));
                                     securityOptions.ForPath(Path.Combine(settings.HostServicePath, "logs"), fs => fs.GrantReadWrite(settings.WebUserName));
                                     securityOptions.ForPath(@"~\C$\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files", fs => fs.GrantReadWrite(settings.WebUserName));
                                     if (Directory.Exists(@"~\C$\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files"))
                                     {
                                         securityOptions.ForPath(@"~\C$\Windows\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files", fs => fs.GrantReadWrite(settings.WebUserName));
                                     }

                                     securityOptions.ForCertificate(settings.CertificateThumbprint, c =>
                                     {
                                         c.GrantReadPrivateKey()
                                             .To(settings.WebUserName)
                                             .InStoreLocation(StoreLocation.LocalMachine)
                                             .InStoreName(StoreName.My);
                                     });

                                 });
                             });

                             
          DeploymentStepsFor(Host,
                             s =>
                             {
                                 var serviceName = "ServiceName.{{Environment}}";
                                 s.WinService(serviceName).Stop();

                                 s.CopyDirectory(@"..\_PublishedApplications\ServiceName").To(@"{{HostServicePath}}").DeleteDestinationBeforeDeploying();

                                 s.CopyFile(@"..\environment.files\{{Environment}}\{{Environment}}.servicename.exe.config").ToDirectory(@"{{HostServicePath}}").RenameTo(@"servicename.exe.config");

                                 s.Security(o =>
                                 {
                                     o.ForCertificate(settings.CertificateThumbprint, c =>
                                     {
                                         c.GrantReadPrivateKey()
                                             .To(settings.ServiceUserName)
                                             .InStoreLocation(StoreLocation.LocalMachine)
                                             .InStoreName(StoreName.My);
                                     });
                                     o.LocalPolicy(lp =>
                                     {
                                         lp.LogOnAsService(settings.ServiceUserName);
                                         lp.LogOnAsBatch(settings.ServiceUserName);
                                     });

                                     o.ForPath(settings.HostServicePath, fs => fs.GrantRead(settings.ServiceUserName));
                                     o.ForPath(Path.Combine(settings.HostServicePath,"logs"), fs => fs.GrantReadWrite(settings.ServiceUserName));
                                     o.ForPath(settings.ServiceWorkDirectory, fs => fs.GrantReadWrite(settings.ServiceUserName));
                                     o.ForPath(settings.ServiceTriggerWatchDirectory, fs => fs.GrantReadWrite(settings.ServiceUserName));
                                     o.ForPath(settings.SecureWorkDirectory, fs => 
                                          { 
                                              fs.GrantReadWrite(settings.ServiceUserName);
                                              fs.RemoveInheritance();
                                              fs.Clear().Preserve(settings.ServiceUserName)
                                                  .RemoveAdministratorsGroup()
                                                  .RemoveUsersGroup();
                                          });
                                 });
                                 s.WinService(serviceName).Delete();
                                 s.WinService(serviceName).Create().WithCredentials(settings.ServiceUserName, settings.ServiceUserPassword).WithDisplayName("servicename({{Environment}})").WithServicePath(@"{{HostServicePath}}\servicename.exe").
                                     WithStartMode(settings.ServiceStartMode)
                                     .AddDependency("MSMQ");

                                 if (settings.ServiceStartMode != ServiceStartMode.Disabled && settings.ServiceStartMode != ServiceStartMode.Manual)
                                 {
                                     s.WinService(serviceName).Start();
                                 }
                             });
      });
  }

    //order is important
    public static Role Db { get; set; }
    public static Role Web { get; set; }
    public static Role Host { get; set; }
}
}

You might immediately see how this really sets up an environment. The biggest idea in DropkicK is that you can specify a complete setup, taking a machine from nothing to completely configured.

RoundhousE–Intelligent Database Migrations And Versioning

“Because everyone wants to kick their database, but sometimes kicking your database is a good thing!”

Most people would not argue against versioning your code, and few would argue against versioning it in a way that can lead back to a specific point in source control history. However, most people don’t really think of doing the same thing with their database. That’s where RoundhousE (RH) comes in.

I have been working on RH for over two years now, and people always wonder what it is, why it exists, and what sets it apart from other migrators. We set out to make a smart tool for migrations that came somewhat close to Ruby’s ActiveRecord Migrations without going the code migrations route (yet). Hopefully this introduction will help you understand why it is different and whether it’s something that is in line with your needs.

What is RoundhousE?

RoundhousE (http://projectroundhouse.org) is a database migrator that uses plain old SQL scripts to transition a database from one version to another. RoundhousE currently works with Oracle, SQL Server (2000/2005/2008/Express), Access, MySQL, and soon SQLite and PostgreSQL. It comes in the form of a console tool, an MSBuild task, and an embeddable DLL. While someone is working on a GUI, there is no visual tool at the current time.

RoundhousE - Kick It!

What sets RoundhousE apart from other migrators?

It subscribes to the idea of convention over configuration, which means you can pass the migrator very few configuration options to get it to work (rh.exe /d dbname), but you can pass as many options as necessary to match your own conventions. Say you don’t like the table or folder names that RH uses; you can override those to whatever you want.
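
To give a feel for that, here are a couple of invocations. Only the /d switch is shown in this post; the other option names here are my best recollection and may differ slightly, so check rh.exe /? for the real spellings.

# Bare minimum - conventions supply everything else (local server, default folder names, and so on).
rh.exe /d MyAppDatabase

# Overriding a few conventions: target server, scripts folder, and environment (switch names assumed).
rh.exe /d MyAppDatabase /s DbServer01 /f .\db /env LOCAL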

RH versions the database how you want it versioned. You can supply it with a DLL path for it to pull the file version from. You can give it an XML file and XPath, or you can use the highest script number in the up folder. You can also just use a sequence based (non-global) form of passive versioning. https://github.com/chucknorris/roundhouse/wiki/Versioning

RH believes in low maintenance and keeping a good clean history in your source control. This means you don’t lump everything into one folder; you put your anytime scripts (views/functions/stored procedures/etc.) into their own folders and track history as you go. RH is smart enough to only run these scripts when they are new or different from what is already in the database.

RH has three modes of operation. Normal, DropCreate, and Restore. Notice none of those are Create like you may see in other migrators. If the intent in the end is to have a database ready to go, why would you want to have to make a step to specify that you want to create the database? RH is smart enough to realize that the database doesn’t exist and it creates it (unless you pass a switch explicitly telling it not to). Normal is just the migration as it is. DropCreate is used during development when you want to continually change the same scripts prior to production. Restore is used when you switch to maintenance mode and want to change the same maintenance script. https://github.com/chucknorris/roundhouse/wiki/RoundhousEModes

RH is environment aware, which means you can have environment specific scripts. If you have scripts or permissions scripts that are different for each environment you can give them a special name.  https://github.com/chucknorris/roundhouse/wiki/EnvironmentScripts

RH is easy to start using on legacy databases. You just take your old DDL/DML scripts and move them into a special folder that RH will only evaluate/run when it is creating a database (say, on a new developer's machine). You can arrange existing scripts into RH's default folders or point RH to the existing folder types. RH also splits scripts that contain the GO batch terminator.

RH speeds up your development process. You can use RH with NHibernate to refresh your database without leaving Visual Studio! Entity Framework and FluentMigrator are planned for this feature as well. https://github.com/chucknorris/roundhouse/wiki/Roundhouserefreshdatabasefnh

RH runs on just the .NET framework. This means you don’t need SMO installed like some other migrators require.

While there are probably other features I haven’t mentioned, keep in mind that RH is not a code migrator (yet). If you are looking for a code migrator, there are quite a few good tools out there, including FluentMigrator and Mig#. Entity Framework Code Migrations is really starting to shape up as well (Seriously! Although EF only works for SQL Server).

How do I get RoundhousE?

There are several avenues to get RH. You can use NuGet, Chocolatey, Gems, plain old downloads (still considered official releases), or source (both in git and svn). https://github.com/chucknorris/roundhouse/wiki/Getroundhouse
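
For the package manager routes, the commands look roughly like this, assuming the package/gem id is simply "roundhouse" on each feed:

# Chocolatey
cinst roundhouse

# NuGet, from the Visual Studio Package Manager Console
Install-Package roundhouse

# Ruby gems
gem install roundhouse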