In the case of software projects, each project yields only a single observation. To avoid comparing apples and oranges, the software industry tries to group similar projects for measurement and comparison, but there is no standard for grouping. I always try to group them by the following:
I wonder, from your experience, if you have any suggestions or guidelines in this area?
Plotting results from dissimilar projects on a single control chart can certainly lead to erroneous conclusions. It basically comes down to the question: what is the process?

In the case of software development, if you are tracking the time to complete the coding for each change to the software, and you include all changes on a given control chart, then you are likely to detect some of the larger feature changes as special causes relative to the more minor changes. Constructing separate charts for each type of change, using some classification of the size of the change (lines of code?), can be helpful.

Alternatively, you can take a short-run approach and chart a standardized deviation from the estimated time for each change. The advantage of this latter approach is that it removes some of the subjectivity, and the resulting chart flags as special causes those changes where your estimates were very different from the actual time. This may be more useful from a process improvement point of view.
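The short-run approach above can be sketched as follows. All numbers here are hypothetical, and sigma is estimated from the average moving range of the deviations (MR-bar / 1.128), as on a standard individuals chart; this is one common choice, not the only one.

```python
# Short-run chart sketch: standardize each change's deviation from its
# estimated coding time, so changes of very different sizes share one chart.
# The data below are hypothetical illustration values.

actual = [12.0, 30.0, 8.5, 22.0, 15.0, 48.0, 9.0, 18.0]    # hours to complete
estimate = [10.0, 28.0, 9.0, 20.0, 16.0, 32.0, 8.0, 17.0]  # estimated hours

# Deviation from estimate for each change
dev = [a - e for a, e in zip(actual, estimate)]

# Estimate short-term sigma from the average moving range of the deviations
# (d2 = 1.128 for a moving range of span 2, as on an individuals chart)
mr = [abs(dev[i] - dev[i - 1]) for i in range(1, len(dev))]
sigma = (sum(mr) / len(mr)) / 1.128

# Standardized deviations: points beyond +/-3 flag changes whose actual
# time differed from the estimate by far more than typical variation
z = [d / sigma for d in dev]
special = [i for i, zi in enumerate(z) if abs(zi) > 3]
```

With these illustrative numbers, the sixth change (a 16-hour overrun against its estimate) falls beyond the 3-sigma limit, while the routine small misses do not, which is exactly the signal this chart is meant to surface.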
Learn more about the Lean Six Sigma principles and tools for process excellence in Six Sigma Demystified (2011, McGraw-Hill) by Paul Keller, in his online Lean Six Sigma DMAIC short course ($249), or his online Green Belt certification course ($499).