
Best practices for test environment

We have a K1000 and want to test changes in a controlled environment, and I'm wondering what others in the community do. It's been suggested here that we get another K1000 for testing, but I'm not sure that's the best way to go. My thought is that we just need a test lab of computers to experiment with. I know you can cause issues if you make the wrong change on the K1000. Any best-practice tips? Our K1000 admin position was cut and we're redistributing the work.



Answers (6)

Posted by: jknox 11 years ago
Red Belt
4

SMal and nheyne both have great suggestions.

You could use a different organization for your test group.  Submit a support ticket if you do not have organizations enabled.  Alternatively, you can purchase a VM license for a test K1000 if you want a separate appliance.

Either way, my suggestion would be to use a test group, either through separate orgs or labels (such as your IT department), to deploy to first when testing patches, MIs, etc.

I'm a big fan of using VMs for this purpose because it's very easy to revert to a snapshot if something goes wrong.  The only caveat to this is that you can't test things like Dell updates on VMs.
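
To make that revert step repeatable, something like the sketch below can reset a test VM to a known-good snapshot before each deployment test. It assumes VMware Workstation with the vmrun command-line tool installed and on the PATH; the .vmx path and snapshot name are placeholders for your own lab, not anything KACE-specific.

# Sketch: revert a VMware test VM to a clean snapshot before each test run.
# Assumes VMware Workstation's "vmrun" CLI is available; the .vmx path and
# snapshot name are placeholders.
import subprocess

VMX = r"C:\VMs\k1000-testlab\k1000-testlab.vmx"   # placeholder path
SNAPSHOT = "clean-baseline"   # snapshot taken after imaging + KACE agent install

def revert_and_start(vmx, snapshot):
    # Roll the VM back to the known-good state, then power it on.
    subprocess.run(["vmrun", "-T", "ws", "revertToSnapshot", vmx, snapshot], check=True)
    subprocess.run(["vmrun", "-T", "ws", "start", vmx], check=True)

if __name__ == "__main__":
    revert_and_start(VMX, SNAPSHOT)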

Posted by: nheyne 11 years ago
Red Belt
3

We have gotten by with the test lab method so far; if we want to implement anything using the K1000, we use those machines first.  Then, if all testing goes smoothly, we pick a school (we're a K-12) and deploy to only that one location next.  If everything is still smooth, we consider it ready for production and take it global.  I've never really understood having a second K1000, with the exception of version upgrades.  Other than that, I don't know what the benefit would be.

Posted by: SMal.tmcc 11 years ago
Red Belt
3

We use labels to deploy. We keep a couple of each deployed model in a "testlab" label and test against those first.  If those all go well, we deploy to the "ITdept" label so our own staff gets the update. If that goes well, we add one site, wait a day, and finally add the other sites by their labels.

We just created a special label called "exception" and applied it to certain machines we don't want KACE to update; now we push to a new label that targets all machines at a site except those with the "exception" label.
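
Purely to illustrate the targeting logic (in practice this is done with a smart label on the appliance), the "all machines at a site except the exception machines" rule is just set subtraction. The machine names and label contents below are made-up examples:

# Illustration only: "all machines at a site, minus those in the exception
# label", expressed as set arithmetic. Names below are invented examples.
site_label = {"LAB-PC-01", "LAB-PC-02", "FRONTDESK-03", "KIOSK-04"}
exception_label = {"KIOSK-04"}   # machines we don't want KACE to update

deploy_targets = site_label - exception_label
print(sorted(deploy_targets))    # ['FRONTDESK-03', 'LAB-PC-01', 'LAB-PC-02']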


Comments:
  • How do all of you folks have enough room to have machines for testing just sitting around plugged in? We have at least 6 different models of PCs around this place, not counting laptops. - AFCUjstrick 11 years ago
    • I have two racks with 2 shelves each, so by using the bottom as well I can have 12 computers hooked up. Because of academic and administrative image testing at certain times of the year, I can typically have 10 in use. I have an 8-port KVM, and two of the machines are my masters with their own monitor, keyboard, and mouse. Laptops are done on a side table when I have some around for testing. - SMal.tmcc 11 years ago
      • That sound was my jaw hitting the floor. We're in the dark ages over here. - AFCUjstrick 11 years ago
    • We rarely have spare physical machines lying around, but I can usually snag one or two and also use VMs. - nheyne 11 years ago
      • Is it possible for VMs to emulate desktop hardware? I know very little about VMs. Just thinking in terms of having a VM emulate an OptiPlex 390 or something along those lines. - AFCUjstrick 11 years ago
    • No, but for anything software-related that you're going to push out, a VM would be very reliable. For drivers, though, you'd want the actual machine. - nheyne 11 years ago
Posted by: jegolf 11 years ago
Red Belt
3

I have test VMs where all initial testing happens to get Managed Installs or Scripts verified as successful. There's no risk of harm in that environment. Then I have a first test group in the production environment, a group of work-study/intern machines where if something blew up it wouldn't necessarily be a four-alarm fire. Then I leak things out to smaller groups before targeting all machines. If I'm doing an update for an application I've done hundreds of times I may speed things up a bit, which sometimes bites me back.


Comments:
  • But to add to everyone else's points: labels are your friends... - jegolf 11 years ago
Posted by: TankGirl 11 years ago
Senior Yellow Belt
3

Labels and a test group.  We tried the lab setup with one of each type of hardware we use, but it was unreliable because those machines were not being actively used the way machines "in the wild" were.  Evaluating impact on performance, for example, never went well that way; without fail, once we got to the higher-level managers and their computers slowed down (patching, software deployments), we were basically told to turn it off.  Now we try changes on a machine or two (if it's hardware-related), then the IT department label gets to be the first round of victims, and then we move on department by department to minimize impact on company workflows.  As others have suggested, VMs are perfect for software deployment testing, labels make that controlled environment easy to target, and we just have all service desk staff use those VMs during the testing phase.
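
Purely as an illustration of that department-by-department cadence (the label names and the one-day soak period are assumptions, not anything specific to this setup), the rollout order could be sketched like this:

# Sketch of a department-by-department rollout with a soak period between
# waves. Label names and the one-day wait are illustrative assumptions.
from datetime import date, timedelta

waves = ["testlab-VMs", "IT-department", "Finance", "HR", "Operations"]
soak_days = 1   # wait between waves to catch problems before moving on

start = date.today()
for i, label in enumerate(waves):
    release_day = start + timedelta(days=i * soak_days)
    print(f"{release_day}: enable deployment for label '{label}'")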


Comments:
  • Very much agree with your "in the wild" comment. VMs and test machines are great, but most of the time they're going to be treated with the same care that IT desktops are,
    whereas computers "in the wild" are going to be beat up and bloody. - AFCUjstrick 11 years ago
Posted by: blaise_gregory 11 years ago
Senior Yellow Belt
0

We have a pretty sophisticated setup that is necessitated by the size of our IT environment.  We have a dedicated test environment that consists of a vK1200 and fewer than 25 physical and virtual machines.  The physical PCs are used for evaluating hardware-dependent components like driver injection during build time, custom inventory rules, etc.  We'll be moving to a fully virtualized PC environment for software package deployment testing in 2014.  The virtual PCs are snapshotted so we can quickly roll back to a pristine state.  Currently, software deployment testing occurs on our physical PCs, which is somewhat tedious to roll back during iterative testing.  Once validation is completed (e.g., the install occurs without error with the desired result, it installs silently for the user, and rollback/uninstall is successful), the package is promoted (export > import) to QA.

We have a multi-org'ed, physical K1200. Our QA lab (a mock-up of a retail location) is placed in its own org so the lab admins and testers can perform integration testing via scripted test cases against their targeted systems without any possibility of releasing the new software to production systems.  Once QA has certified the package, my team takes over again and exports > imports the package to our production org, consisting of 18K+ endpoints.
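
A rough sketch of that validation gate, assuming nothing about the KACE API itself: a package only gets promoted (export > import) to the next org once every check passes. The check names mirror the criteria above; the data structure is just an illustration.

# Sketch of the promotion gate: promote a package only when all validation
# checks pass. Check names mirror the post; the structure is illustrative.
VALIDATION_CHECKS = (
    "installs without error with the desired result",
    "installs silently for the user",
    "rollback/uninstall is successful",
)

def ready_to_promote(results):
    # results: dict mapping check name -> bool outcome from manual testing
    return all(results.get(check, False) for check in VALIDATION_CHECKS)

if __name__ == "__main__":
    results = {check: True for check in VALIDATION_CHECKS}   # example outcome
    print(ready_to_promote(results))   # True -> export from test, import to QA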

The software is scheduled for release via script to a live beta group of PCs.  After a few days of testing, we begin releasing to targets in selected locations via script (so that the software is installed while our retail locations are closed).  Once we feel we've reached a desired level of saturation, we leave the script enabled for break/fix and create a managed installation to catch up machines that may have been offline.  This activity also forces the software package to be installed at build time.
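
As a rough sketch of that release cadence (the saturation threshold, store hours, and counts below are assumptions for illustration, not KACE settings), the decision logic might look like:

# Sketch of the release cadence: scripted installs only outside store hours,
# then a managed-install catch-up once saturation is reached. The threshold,
# hours, and counts are illustrative assumptions.
from datetime import datetime

SATURATION_TARGET = 0.95          # assumed "desired level of saturation"
STORE_OPEN, STORE_CLOSE = 9, 21   # assumed retail hours (9am-9pm local)

def in_install_window(now):
    # Scripted installs run only while retail locations are closed.
    return now.hour < STORE_OPEN or now.hour >= STORE_CLOSE

def next_phase(installed, total):
    saturation = installed / total
    if saturation < SATURATION_TARGET:
        return "keep scripted release enabled"
    # Past the target: leave the script on for break/fix and let a managed
    # installation catch up machines that were offline.
    return "add managed installation to catch up stragglers"

if __name__ == "__main__":
    print(in_install_window(datetime.now()))
    print(next_phase(installed=17250, total=18000))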

For the most part, this process works for us.

 