...

Recommended ServiceNow best practice (source wiki) is that Update Sets are kept 'fairly small'. This is done so that it is easy to isolate changes to a particular context, and to back out a particular incorrect Customer Update from an otherwise correct Update Set. Furthermore, only items for one Requirement should go into a given Update Set. It is fine for one Requirement to have multiple Update Sets; we do not want one Update Set that affects multiple Requirements.

Our own Yale process is that we create Update Sets in DEV. We name Update Sets for the date they were created and the name of the author, with a description that contains the Requirement number (and the Defect number, if applicable), along with a description of the work contained in the Update Set.
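
For illustration only, an Update Set following that convention could also be created from a background script rather than the usual UI form. The name, requirement, and defect numbers below are made-up placeholders, not real records, and this is a sketch rather than our actual procedure:

    // Hypothetical sketch: create an Update Set in DEV that follows the naming convention.
    // All values are examples only; in practice we normally create these through the UI.
    var us = new GlideRecord('sys_update_set');
    us.initialize();
    us.name = '2014-05-02 JSmith';                              // date created + author
    us.description = 'R999 / D9999: description of the work';   // requirement, defect, work summary
    us.state = 'in progress';                                    // new sets start In Progress
    us.insert();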

When an Update Set is In Progress, changes to the platform by that Author are captured in that Update Set. Unit testing is performed in place by the Author, and when the work is completed, the Update Set is moved to the Complete state.

...

Recall that Update Sets contain one or more Customer Updates. It is possible to merge related Update Sets. The merged Update Set contains all the Customer Updates previously contained in the individual Update Sets that were merged. The previous Update Sets still remain listed, but they contain no Customer Updates after the merge; they are essentially useless, empty containers at that point. Official ServiceNow best practice is that after a Merge, the Update Sets that are now empty should be set to Ignore. There is no such thing as an Undo Merge operation. If a mistake is found after a merge, we would need to create a new Update Set in DEV, fix the issue, and once the fix is verified, merge that Update Set with the other Update Set.
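
As an illustrative aid (not an official procedure), a background script along these lines could list Complete Update Sets that no longer contain any Customer Updates, such as the empty leftovers of a Merge, so they can be reviewed and set to Ignore. The table and field names are the standard sys_update_set / sys_update_xml ones, but treat this as a sketch to try in a sub-production instance first:

    // Sketch: find Complete update sets that contain zero Customer Updates.
    var us = new GlideRecord('sys_update_set');
    us.addQuery('state', 'complete');
    us.query();
    while (us.next()) {
        var xml = new GlideAggregate('sys_update_xml');
        xml.addQuery('update_set', us.getUniqueValue());
        xml.addAggregate('COUNT');
        xml.query();
        var count = xml.next() ? parseInt(xml.getAggregate('COUNT'), 10) : 0;
        if (count === 0) {
            gs.print('Empty update set: ' + us.getValue('name'));
            // us.state = 'ignore'; us.update(); // only after reviewing the printed list
        }
    }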

Because the Merge feature wasn't always reliable in earlier versions of the ServiceNow platform, Yale has not merged Update Sets in the past. We have heard that Dublin resolves the previous issues, although there is still no Undo Merge feature. Allen from ServiceNow has suggested that Yale consider using Merge, and that we Merge in the TEST environment, after testing is complete, before advancing to PRE-PROD and PROD. When Allen gave that advice, he was under the impression that we were performing UAT in the TEST environment, so we should ask the question again after fully clarifying how the meaning of the TEST environment has changed since Fruition originally advised us on the purpose and function of our environments.

Advancing code from DEV to higher environments

...

To Back Out, we need to change to the Local Update Sets view and sort by the Created field, most recent first.

Only the most recently committed Update Set has the Back Out feature. This is a platform limitation. We do not have the ability to Back Out just one particular thing that was applied thirteen (or any arbitrary number of) update sets ago.
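
To see which set that is without clicking through the list, a quick background-script sketch like the one below lists Local Update Sets newest-first, mirroring the sort described above. This is just a convenience query, not part of the Back Out procedure itself:

    // Sketch: list the ten most recently created local update sets, newest first.
    // Only the most recently committed one will offer the Back Out option.
    var us = new GlideRecord('sys_update_set');
    us.orderByDesc('sys_created_on');
    us.setLimit(10);
    us.query();
    while (us.next()) {
        gs.print(us.getValue('sys_created_on') + '  ' + us.getValue('name') + '  [' + us.getValue('state') + ']');
    }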

It is possible to chain the Back Out feature and back out multiple update sets, but doing so gets very tricky very fast. To do so, you must follow a last-in, first-out process: to back out the thirteenth-ago update set, you first need to back out the twelve that were applied more recently than it. Then you need to reapply those twelve in the appropriate order. Recall that apply order is essential to not reintroducing defects that you have already solved with later update sets.
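
Purely as an illustration of the ordering involved (Back Out and re-apply are manual steps in the platform, not script calls), the sequence looks roughly like the sketch below; backOut() and reapply() are imaginary stand-ins for those manual steps and only print what would happen:

    // Imaginary sketch of the last-in, first-out ordering only; nothing here touches real update sets.
    function backOut(name) { gs.print('Back Out:  ' + name); }   // stand-in for the manual Back Out step
    function reapply(name) { gs.print('Re-apply:  ' + name); }   // stand-in for re-committing a set

    var committed = ['US-01', 'US-02', 'US-03', /* ... */ 'US-13']; // oldest to newest
    var target = 'US-01';                                           // the thirteenth-ago set we want backed out
    var toReapply = [];
    for (var i = committed.length - 1; committed[i] !== target; i--) {
        backOut(committed[i]);           // undo everything applied after the target, newest first
        toReapply.unshift(committed[i]); // remember it in oldest-first order for re-application
    }
    backOut(target);                     // now back out the set we actually care about
    for (var j = 0; j < toReapply.length; j++) {
        reapply(toReapply[j]);           // reapply the rest in the original (oldest-first) order
    }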

In the process of backing out several Update Sets, you are likely to break other requirements and features that are wholly unrelated to the issue you are trying to solve. If you tried to chain Back Outs in Production, you would be taking certain features offline or reintroducing bugs. If you tried to chain Back Outs in TEST, you would disrupt the testing process, and testing could not resume until the disruption is resolved. And again, chaining Back Outs is a very delicate operation: everything that gets backed out needs to be reapplied in exactly the same order in which it was originally applied, or we may introduce a new defect. In short, chaining Back Outs should be an absolute last resort, as there is almost always a less dangerous and less disruptive way to accomplish your goal.

...

The root cause would be isolated, and the fix would be developed in a new Update Set in DEV. The fix would be unit tested in DEV and then advanced to TEST using the process illustrated above (set the Update Set to Complete in DEV, then in TEST: Update Source, Preview, Resolve Conflicts, Commit). Then the tester(s) would review the fix and either mark the defect resolved in HPALM, file a new defect, or report that the fix does not resolve the existing defect. The process may be repeated a second time if the defect still affects only the feature in question and does not impact the rest of the system.

...

I recommend this approach for R812. R812 has a minor defect (D1031) caught in UAT. D1031 has a known root cause, with a fix unit-tested in DEV. I recommend we apply the D1031 fix to TEST and, if approved, that R812 and the D1031 fix launch in this release.

...


This code is pretty good, but there is a defect so bad that it affects other testing

In this case, a defect has been found that affects multiple modules, and testing cannot continue. The testers should file a defect and immediately notify the development team of the issue. The development team needs to begin Root Cause Analysis. Testing has stopped anyway, so we are less reluctant to cause disruption and more concerned about fully eradicating the issue. There are multiple approaches to solving the problem, and they depend on the results of the Root Cause Analysis.

Root cause identified, and it's one portion of one requirement. We want to keep fixing the requirement until we get things right.

In this situation, there are two steps. Step one is to disable the portion of the requirement that is causing the defect. This likely means temporarily removing that feature's functionality, but it ends the disruption to testing and allows testing to resume. If the root cause was a bad client script, we could create an update set in DEV that disables the bad client script, advance that update set to TEST, and have testers confirm that the TEST system is back to normal and that testing can resume on other items.
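
As a rough sketch of what step one could look like for that example, the script below (run in DEV with the fix Update Set current, so the change is captured as a Customer Update) deactivates a client script by name. The name is a placeholder, and the same change can just as easily be made from the Client Scripts form instead:

    // Sketch: disable (do not delete) the offending client script so testing can resume.
    var cs = new GlideRecord('sys_script_client');
    cs.addQuery('name', 'Bad client script name'); // placeholder name
    cs.query();
    while (cs.next()) {
        cs.active = false;  // deactivating keeps the script around so it can be fixed later
        cs.update();        // this change is recorded in the current (In Progress) update set
    }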

...