Category Archives: General QA Stuff
I’m not stupid. But sometimes I feel like a complete moron. Usually it’s something to do with a tool download that humbles me. I have been using Fitnesse for years and I really thought I knew what I was doing. Luckily I have had really good developer support to do the heavy lifting. So I got a little cocky and signed up to do a couple of presentations on Fitnesse at the local software testing conference. I wanted to demonstrate a simple test using the fixtures in the Fixture Gallery. Sounds pretty simple, right? Cue the brick wall.
So I scour the web and the Fitnesse.org site for simple, easy instructions. I find the fixture gallery zip file, download it, and unzip it. It seems to include a completely new installation of Fitnesse. I’m confused – let’s consult the documentation. Well, I would if I could find any – I can’t!
Maybe there are books available? Nope! Well there are, sorta, but I’ll save that for another post.
I’m not giving up. Once I get this solved I’m going to write the Idiot’s Guide to Using Fitnesse.
If anyone can help and walk me through this, please contact me at firstname.lastname@example.org.
I just found out that the company that I was contracting for is looking for 3 contract testers on my old team. I absolutely loved it there, loved the project, loved the company, etc. The open positions are apparently to replace me – the exact same job I spent almost 2 years building – using a test architecture that I built from the ground up. I could be immediately productive. Literally hit the ground running. You’d think I’d be a shoo-in to return. Think again! I just heard that I’m basically not qualified to be me and that I won’t be considered for the open positions. Needless to say, I’m livid. Have you met me? I’m a Golden Test God!…lol. Oh well. No need to dwell on it. It’s their loss! As Jimmy Buffett says – “Breathe In, Breathe Out, Move On”!
For those of you in the Denver area, or wanting to visit our fine city, SQuAD will be holding their next conference in October. Yours truly will be providing both a one-hour presentation and a half-day workshop on Fitnesse. Check out http://www.squadco.com for more details. If the current info isn’t up yet – keep checking back.
We all are!
I get a kick out of watching the news for “quality” issues – or should I say “lack of quality” issues! There are usually lots of them.
Someone recently told me that there is no difference between Quality Assurance and Quality Control. Seriously?
Sadly, it’s not the first time. In my not-so-humble opinion, most companies or organizations have absolutely no clue as to the difference between Quality Assurance and Quality Control. Even though Quality Control has been an outdated practice for more than twenty years, I still find a “quality control mindset” on each consulting assignment. They may even call it Quality Assurance, but it ain’t. Let’s look at the two, shall we?
Quality Control (QC) is the traditional way of looking at quality. In a QC environment, quality efforts are usually tacked on at the end of a given process. Something is produced and then given to the QC folks to test and determine if the product meets quality standards. In the event a quality issue is detected (and there usually is one), the entire product may need to be scrapped and rebuilt. Or, where it is practical, pieces or parts of the product may need rework. Then the whole production/quality process is repeated until quality standards are achieved. If time is critical, it’s the quality process that gets the ax. Ultimately, only the QC Team is held responsible for quality. A QC mindset is expensive. Had the problems been found earlier in production, or even during the design process, they would have been significantly less expensive to fix. Ask Toyota.
Quality Assurance (QA), on the other hand, involves having an eye to quality at every level of the organization, throughout every process, by everyone involved in designing, producing, and even using the product – be it a computer, or the software that runs it. With QA, quality isn’t saved until the end but rather looked at constantly – by everyone, not just testers. It’s having what I like to call the “Quality Mindset”. The goal should be to identify quality issues constantly throughout the process and adjust as necessary (the controlling function of management). Finding and resolving issues early ultimately saves money! Sure, it may take a bit longer to produce the final product and cost some money up front, but in the long run it saves money. Ultimately it produces goodwill and trust with the customer. Unfortunately, to the bean-counters, trust and goodwill are difficult to measure in terms of dollars and cents (or is that sense?).
If you’re Microsoft and control your market, you can slip up on quality (or customer service) and get away with it. You can make your products difficult to install or completely annoy your customers. Since there is no real competition, customers have nowhere else to turn. So the company can make you lose all of your installed software when you try to upgrade to a more expensive version of THEIR operating system. (Try upgrading from Windows 7 Home to Windows 7 Professional – you can’t. They force you to do a new installation. Not only do you have to reinstall all of your software, you also have to reinstall all of your drivers.) Then they can get away with charging a small fortune to help you resolve the problem when you call customer support. What other option do you have? Why make me pay to use the more expensive version of YOUR product and then charge me for assistance? Or for that matter – to use your product at all (see Microsoft Project). Because they can!
It is unlikely your company can get away with that. You need a quality product and you need to produce it as inexpensively as possible, and you probably have to do it in a crowded, competitive marketplace.
I like to get involved in any product development early. I know most of the pitfalls and I can catch most of them early – hopefully before the first line of code is ever written. Will things slip by? Absolutely! As much as I like to think I’m perfect, I’m the first to admit that I’m not. I want to see all design documents, specifications, requirements, etc. I may find something really trivial – but it’s a lot cheaper to fix it now by running a spell checker than it is to fix it in the code, re-install it, and re-test it.
Watch for the tell-tale signs. Listen for things like “We’ll catch it in test” or “That’s not important right now” or “It’s someone else’s job to fix that – they’ll find it later”. You gotta love job security!
Shut up and Test!
Shut up and Test!
Shut up and Test!
Shut up and Test!
Shut up and Test!
My new mantra
Thanks Jimmy Buffett and Mac McAnally!
It’s my job to be different than the rest
And that’s enough reason to go for me
It’s my job to be better than the rest
And that’s a rough break for me
It’s my job to be cleaning up this mess
And that’s enough reason to go for me
It’s my job to be better than the rest
And that makes the day for me
One from the Cap’n’s Treasure Chest. I’ve just been thinking about this one a lot recently. One of my character traits – or flaws – is that I will usually tell people – sometimes bluntly – what I think. It’s my job. If you are paying me big bucks to look at your stuff, then you deserve some honesty. You may not like the message, but it’s better to hear it from me than from your customer. So here goes… one from the archives…
While I’m not a huge fan of American Idol, I’m a big fan of Simon Cowell! Do you think if one of the Idol winners ever gets an award, like a Grammy, they will thank Simon? I doubt it. He’s lucky. He can be open and say what he feels. Sure people are angry at him. But you know what? He’s usually right. It’s just the message delivery that upsets most people. Deep down, they know. Unfortunately, for software testers, that’s not a luxury we have.
I think Simon would be a great software tester. A desirable attribute for any good software tester is to tell people what we think – tactfully (OK, this is SO not Simon). Software developers, Project Managers, Product Managers, and anyone else on a software project, spend countless hours designing and developing software applications for customers. They’re proud parents. It’s their baby. Then they show it to us, and essentially ask: “What do you think?” More often than not we have to tell them they have an ugly baby!
I imagine, if you have an ugly baby, Simon would tell you so!
Nobody wants to tell someone they have an ugly baby, but unfortunately, it’s our job. The key is how you tell them. Sadly, you will always run into a proud parent that will be hurt no matter how you tell them. Different parents will respond differently depending on the way the message is delivered or received.
So how do we handle these delicate situations? Having been a test consultant now for a few years, I’ve had to deal with a number of delicate parents. Here are a few tips on the proper care and feeding of delicate parents:
- Build a Rapport with the Team: Get to know them. Take them to lunch. Buy them bagels or donuts. Let them know that you are there to make them look good. When a virtually flawless application is delivered to a customer, no one says how well tested it was. Development teams will always get the credit. However, if it is delivered with bugs, everyone will wonder who tested it!
- Be Honest and Responsive: One of the best compliments I ever received was: “If Dave and I grew up together, I’d never let him touch my toys. He breaks everything!” Tell them up front, you’re going to do everything in your power to break their application. It’s what you do! Although a good magician never reveals their secrets, a good tester should. If I have time, I will usually tell them what my attack plan will be.
- Be Open and Available. Want me to take a look at your requirements? – Absolutely! I always let teams know, that if I’m available before formal testing begins, I will give them a free look at their requirements, specifications, code, whatever they have. I won’t create a defect in the bug tracker. I’ll just shoot them a quick email, and make a note to look at it later. It ends up saving everyone time in the long run and, once again, makes them look good when formal testing begins. It also helps me develop and refine my tests.
- Let Them Review Your Tests. If you’re going to look at and critique their stuff, it’s only fair to let them do the same to your stuff.
- Don’t Rely on the Bug Tracker. Never send a public ugly baby notice! The last thing you want to do is rely on the bug tracker to deliver bad news. There is nothing worse, or less productive, than flame wars in the bug tracker.
- Talk to someone! Let them know what you did and why you did it. Show them. Lead them towards a solution. Tell them what your expectations are. It may be a simple misunderstanding of a vague requirement. Count to 10, then write it up.
- Check Your Attitude. This is my personal weakness. You don’t want to come off badly. What’s funny to you, or well meaning, can be completely misconstrued. Be critical, but constructive. You need an air of friendliness and support. If you come off as arrogant or condescending, the ultimate message will be lost.
- Don’t Take it Personally. Tough to do? Absolutely! But you’re just the messenger. It’s usually not about you. You’re just the closest and easiest target. Grow a thick skin.
- Be Prepared. It’s going to happen. Maybe it hasn’t happened yet, but if you do this long enough, it will. Be ready for it.
- Write an Article. While it may not solve anything, you will feel better afterwards. I know I do!
Unfortunately, while these tips tend to work well with in-house teams, it can be difficult with geographically distributed teams. Since they don’t work with you face-to-face every day, and you can’t take them to lunch or buy them donuts or bagels, there is a tendency for the message to get distorted – as I recently learned (the hard way). Even though you may have good intentions, someone’s feelings may get hurt. Unfortunately, attitudes don’t translate well over the phone or by email. Don’t be afraid to apologize. It’s typically a misunderstanding. It’s more important to heal the relationship and move on than to stand on principle. If you’re right – gloat to yourself.
Bottom line – no matter what you do, you’re going to hurt someone’s feelings. Be prepared for it and don’t be afraid to make adjustments if you need to. It may still be an ugly baby, but it’s never too late to do something about it! Someone has to do it. You don’t want the customer to do it.
If you’re in this business for personal rewards or recognition, you probably need to rethink your career choice. Not that they’re not there. They’re just few and far between. No, a good tester gains satisfaction by knowing, albeit silently, that they were the reason the application was virtually flawless. Sure, no one else may notice, but I know, and that’s good enough for me!
If there is one area that seems to be consistently overlooked during the software design process, it’s Error Handling.
A well-designed application will detect and report any error to the user long before it becomes a major issue – like trying to store bad data in the database. Don’t rely on just reporting system error messages, or the dreaded yellow screen in ASP.NET, to let the user know they messed up. In fact – never let developers do that! Good error handling will let the user know, in a friendly manner, that they messed up and will guide them towards an appropriate resolution. Too many times I’ve seen system error messages that went directly to the UI. Not only does it completely, and sometimes unnecessarily, terrify the user, it doesn’t tell them what they did wrong or how to resolve it. OK – it does, but usually in developer-speak. Something like “AccountNumber is NaN”. Huh?
Typically, if we don’t tell the developers how to handle and report errors, they will be left to their own devices. The results: inconsistency – the same type of error is reported differently in different areas of the application; omission – errors may be missed (until the database crashes); and errors reported in developer-speak or raw system messages. I even had an application developed off-shore report an error in Chinese. The text was red, so I assumed I had messed up.
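To make the point concrete, here’s a tiny sketch of the kind of validation I’m talking about (the function name and message wording are made up for illustration): instead of letting “AccountNumber is NaN” bubble up to the UI, translate the failure into plain language the user can act on.

```python
from typing import Optional

def validate_account_number(raw: str) -> Optional[str]:
    """Return a friendly error message, or None if the input is valid."""
    value = raw.strip()
    if not value:
        return "Please enter an account number."
    if not value.isdigit():
        # The developer-speak equivalent would be "AccountNumber is NaN".
        return "The account number may only contain digits (0-9)."
    return None

print(validate_account_number("12a4"))  # friendly message, not a stack trace
print(validate_account_number("1234"))  # None - nothing to report
```

Same check, but now the user knows what went wrong and how to fix it.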
I like to focus on errors and error handling early. Hopefully in the design/requirements process. I discreetly remind them that if they ignore me now, they do so at their own peril later on. I make a note and we move on. By the way, when you do find the error, try to resist the “I told you so” comment. Gloat in private.
There are lots of ways to handle and report errors. Some of them work well. Others, not so well. Here are some of the things I typically look for:
- Consistency: The same type of error gets the same message no matter where it occurs in the application. Same error level, text, icon, color, etc. And the error always appears in the same place (top of the page, next to the data field, etc.)
- Obvious: Don’t make me look for it. Make it highly visible. Don’t make me scroll to find it. I hate it when an application tells me there is an error but doesn’t show me where. Or worse, just reloads the page and makes me guess what happened.
- Feedback: If you can, let the user know not only what they did wrong, but how to fix it. Exception: you can be vague on log on pages. You don’t want to guide a potential hacker to the correct solution.
- Colors: Different colors mean different things. Red = danger, yellow = caution, green = good to go, blue = information. Use a color appropriate to the error. Don’t tell me the system is about to crash in blue.
- Icons: Again, like colors, if you use them, use an icon appropriate for the error. Typical icons I’ve seen: stop signs, caution signs, question marks, check marks. If you use color as well, follow the color rules above. If everything worked, don’t show the user a red check mark.
- Avoid flashing things or sounds: Unless you are about to detonate a nuclear missile. Then it’s OK.
- Watch the text: Don’t be judgmental or long-winded. Quick, short, and to the point.
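A lot of the consistency, color, and icon items above can be enforced by funneling every message through one place instead of letting each screen improvise. A minimal sketch (the level names, colors, and icons are all invented for illustration):

```python
# Each hypothetical severity level gets exactly one presentation,
# so the same kind of error always looks the same everywhere.
SEVERITY_STYLES = {
    "info":    {"color": "blue",   "icon": "info-circle"},
    "success": {"color": "green",  "icon": "check-mark"},
    "caution": {"color": "yellow", "icon": "caution-sign"},
    "danger":  {"color": "red",    "icon": "stop-sign"},
}

def render_error(level: str, text: str) -> str:
    """Format a message so a given level always gets the same look."""
    style = SEVERITY_STYLES[level]  # an unknown level fails fast in testing
    return f"[{style['icon']}/{style['color']}] {text}"

print(render_error("danger", "Connection to the database was lost."))
```

With one helper like this, a red check mark on a success message simply can’t happen.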
Once again, my apologies to the Context-Driven folks – some Best Practices. No wait – I’m not apologizing – I stand by my use of “Best Practices”! Here are some things I’ve seen in the past and really like:
- Combine colors with icons. For example, make icon, text, or background colors fit the error level.
- Use Regular Expressions: Or tell developers to use them. A RegEx is a great way to detect errors. Store them in a database table and call them as needed. It helps to ensure consistency. There are a bunch of regular expressions already written and available on the Internet. There are also some great RegEx checkers.
- Message Catalog: Create a table in the database to store all messages reported to the user. You can also include the message level: Informational, Caution, Danger, etc. Don’t forget positive messages such as: “Record Updated”, “Mail Sent”, etc. The Message Catalog comes in handy if you ever need to localize your application – trust me!
- Write an Error Handling Guide or Specification: Hopefully before the first line of code is ever written.
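To show how the RegEx and Message Catalog ideas fit together, here’s a rough sketch (the patterns, message IDs, and levels are all made up; in a real application both lookups would live in database tables as described above):

```python
import re

# Stand-in for a database-backed message catalog (IDs and levels invented).
MESSAGE_CATALOG = {
    "ERR_ZIP":   ("caution", "Please enter a 5-digit ZIP code."),
    "ERR_PHONE": ("caution", "Please enter a phone number like 303-555-1212."),
    "OK_SAVED":  ("info",    "Record Updated"),
}

# Stand-in for a table of reusable validation patterns, each tied to a message.
VALIDATORS = {
    "zip":   (re.compile(r"\d{5}"), "ERR_ZIP"),
    "phone": (re.compile(r"\d{3}-\d{3}-\d{4}"), "ERR_PHONE"),
}

def validate(field: str, value: str):
    """Return (level, text) from the catalog, or None if the value is valid."""
    pattern, message_id = VALIDATORS[field]
    if pattern.fullmatch(value):
        return None
    return MESSAGE_CATALOG[message_id]

print(validate("zip", "80202"))  # None - valid
print(validate("zip", "8020"))   # caution-level message from the catalog
```

Because every field reuses the same pattern and the same catalog entry, the error text can’t drift between screens – and a localized catalog swaps in without touching the validation code.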
Never overestimate your user. They will mess up! Be one step ahead of them. Think “Defect Prevention”!
If you keep up with my testing rants and raves you know I have little, if any, respect for most of the “Test Celebrities” currently on the book and lecture circuit. There are a handful that I thoroughly despise. They’re arrogant, conceited, self-centered, and other adjectives as well! I’ve met some of them, not all. I’ve also listened to their presentations – when I could stay awake. Most have a “My Way Or The Highway” mentality. You must do as they say or you will be the subject of public ridicule. I personally experienced it when I dared to question one – politely. How dare we question them? After all – they wrote a book! I’m just some no-name tester. They ganged up on me like a pack of starving wolves. One in particular was especially nasty. I once witnessed three of them gang up on a conference speaker they disagreed with. In the middle of the presentation!
But, I’m happy to say there are a small handful of test celebrities worth listening to and worth reading. I like them for three reasons. First, they know what they are talking about and communicate it very well. Second, they are personable and pleasant to chat with. Even when they disagree with you, they are respectful. Lastly, and most importantly, the other group despises them. For me, that alone is reason enough! James Whittaker makes that short list.
If you ever get a chance to listen to James Whittaker – take it! Trust me, not only will you learn something, he is a pleasure to listen to. I recommend you read all of his books, but especially his latest one, “Exploratory Software Testing.” As far as I’m concerned it’s pure genius!
Bottom line – James just gets it!
What is your Test Data Strategy? Do you even have one? Do you even care?
In my less-than-humble opinion, test data can really make or break any test effort. Second only to error handling, test data is rarely something that receives a whole lot of attention. It tends to be one of the more neglected aspects of testing. Personally, I think it is way too important to overlook. As a consultant, whenever I ask clients about their test data strategy I’m usually met with blank stares.
I was once contracted to test an online banking application. One of the key tests was related to transfers between accounts. U.S. banking regulations apparently limit the number of online transfers between two accounts (like savings to checking or vice versa) to 5 transfers per month (they did while I was testing it, anyway). So, at the end of a very long test day, I reached the transfer limit test. I logged on to my test user’s account and made 5 transfers of $100 from the user’s savings account to their checking account. The transfers all succeeded. Then I attempted to make a sixth transfer, which was correctly prevented with an appropriate message as to why. So far, so good… I thought. I went home for the day and figured I’d try again the next morning just to be sure it wasn’t a daily limit.

The next morning, coffee and bagel in hand (with chives and onion cream cheese), I sat down at my computer to resume testing. I launched the application and attempted to log on to my test user’s account. The log on failed. Surely a typo. So I tried again. It failed a second time. Third time’s the charm. This time I took great care to enter the user name and password correctly. Strike 3 – it failed again. So logically, I asked if any changes had been made to the database. My manager informed me that the user had changed their password because someone had been tampering with their account. What?! Apparently I had been using a live account to test, and the account holder was quite rightly upset and changed her password. The incident was also reported to the managers of the bank. I was testing with live production data! I couldn’t believe it.
I assumed it was a blinding glimpse of the obvious that you never, never, ever, test with live data! Well you know what happens when you assume. Lesson learned – the hard way.
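For what it’s worth, the transfer-limit rule itself is easy to pin down in an automated check. Here’s a toy sketch of the business rule (the class and method names are invented – the real test ran against the bank’s application, and hopefully yours runs against a real test account):

```python
class SavingsAccount:
    """Toy model of the rule: at most 5 online transfers per month."""
    MONTHLY_TRANSFER_LIMIT = 5

    def __init__(self, balance: float):
        self.balance = balance
        self.transfers_this_month = 0

    def transfer_out(self, amount: float) -> bool:
        if self.transfers_this_month >= self.MONTHLY_TRANSFER_LIMIT:
            return False  # regulation limit reached - transfer refused
        self.balance -= amount
        self.transfers_this_month += 1
        return True

account = SavingsAccount(balance=1000.00)
results = [account.transfer_out(100.00) for _ in range(6)]
print(results)  # five successes, then a refusal on the sixth
```

Five transfers succeed, the sixth is refused – exactly the behavior I verified by hand that day, minus the angry account holder.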
So now I give a lot of thought to test data. Rule Number 1 – separate test data from production data. But where do you get test data? There are essentially 3 approaches you can take: create it, copy it from production, or a combination of the two. Which one you use will depend on your particular situation, schedules, and database savvy. Let’s take a closer look at each.
Option 1: Create it from scratch. If you are testing a brand spanking new application this may be your only option. Creating your own test data gives you the most flexibility. You can tailor the data to each specific test case. Once it is created you can save a snapshot of it or write scripts to recreate it which will allow you to restore that data to a clean copy at the beginning of each test cycle or as needed. The downside is that it can take a lot of time to build the data. Especially if you need a lot of it. Get to know your DBA. Take them to lunch. Buy them a muffin.
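Creating data from scratch doesn’t have to mean hand-typing rows. Here’s a rough sketch of a repeatable generator (the table layout and field names are invented for illustration); seeding the random generator means every run produces the identical data set, which makes restoring a clean copy trivial:

```python
import random
import sqlite3

def build_test_accounts(conn: sqlite3.Connection, count: int = 100) -> None:
    """Create and fill a hypothetical accounts table with repeatable data."""
    random.seed(42)  # same seed -> same data every test cycle
    conn.execute(
        "CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)"
    )
    for i in range(count):
        # Mix of zero and non-zero balances to exercise sorting and filtering.
        balance = random.choice([0.0, round(random.uniform(-500, 5000), 2)])
        conn.execute(
            "INSERT INTO accounts VALUES (?, ?, ?)",
            (i, f"testuser{i:03d}", balance),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
build_test_accounts(conn)
print(conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0])  # 100
```

Note the obviously fake owner names – nobody will ever mistake testuser042 for a live customer.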
Option 2: Copy the data from production. If you are working on an update to an existing application, you may be able to take a copy from the production database. Even if the database structure is modified from the previous application to the new one, it may still be more efficient to get a copy and modify it than to create it. Of course, the existing data may not support testing. For example, if you are testing data filtering, the production data may not have all of the filter values available to adequately test the filter. If you are testing numeric sorting where values may be positive, negative, or zero, some of those values may not exist in the current copy of the data. As a result, the data needed for a specific test may not even be available. Another issue – production data is constantly changing. The data used in one test cycle may be different from the copy you take for the next cycle. Records may be added, deleted, or modified between tests. As a result, any defect you find may be a code issue or may be a data issue. Because results can vary from cycle to cycle, test results become unreliable. One way around this problem is to take a copy of the production data before the start of the test cycle and save it. Then restore the test database using this saved copy rather than a current copy. The data becomes more consistent and therefore more reliable. Of course, if the database structure changes, you may have some work to do. Another downside to using or copying live data – privacy issues. The data may contain sensitive information such as bank account numbers, social security numbers, usernames and passwords, etc. You may need to cleanse the data before you can use it.
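If you do copy production data, the privacy issue above usually means a cleansing pass before anyone tests with it. One common approach, sketched here with invented field names, is to deterministically scramble the sensitive columns – the same input always maps to the same fake, so relational joins still line up while the real values disappear:

```python
import hashlib

def mask_record(record: dict) -> dict:
    """Replace sensitive fields with stable fakes; keep everything else."""
    masked = dict(record)
    # Hash the SSN so identical inputs always yield identical fakes.
    digest = hashlib.sha256(record["ssn"].encode()).hexdigest()
    masked["ssn"] = f"999-00-{int(digest[:8], 16) % 10000:04d}"
    masked["name"] = f"Customer {digest[:6].upper()}"
    masked["email"] = f"user_{digest[:6]}@example.org"
    return masked

live = {"ssn": "123-45-6789", "name": "Jane Doe",
        "email": "jane@realbank.com", "balance": 512.33}
print(mask_record(live)["balance"])  # 512.33 - non-sensitive data untouched
```

The balances, dates, and other non-identifying columns stay real, so the data still behaves like production data under filtering and sorting.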
Option 3: Combine Options 1 and 2. Start with a copy of production and then modify it or add data to meet the needs of your tests. Write an update script. Once it’s ready – copy it, and save it. At the beginning of each new test cycle you can restore the database using the copy. The benefit – you can have a lot of data and still meet your test needs. Again, do not use a fresh copy of the production data. This is actually my preferred option, especially if I’m testing any kind of filtering, sorting, or searching. If you do those actions with a small data set you may believe these functions are pretty quick performance-wise. However, add a real data load and these functions may slow dramatically or even break.
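The copy-and-restore step in Option 3 can be as simple as copying a database file. A sketch using SQLite (a real DBMS would use its own backup/restore tooling, and the table here is invented for illustration):

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
test_db, snapshot = workdir / "test.db", workdir / "snapshot.db"

# Build the tailored data set once (copy of production plus test rows).
conn = sqlite3.connect(test_db)
conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
conn.commit()
conn.close()
shutil.copy(test_db, snapshot)  # the golden copy

# A test cycle dirties the data...
conn = sqlite3.connect(test_db)
conn.execute("UPDATE accounts SET balance = 0")
conn.commit()
conn.close()

# ...so restore the snapshot before the next cycle.
shutil.copy(snapshot, test_db)
conn = sqlite3.connect(test_db)
print(conn.execute("SELECT balance FROM accounts").fetchone()[0])  # 100.0
```

Every cycle starts from the identical golden copy, so a failure is a code issue, not a data issue.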
Regardless of the Test Data Strategy you use, be sure to give your data needs some thought. Work with your DBAs to find the best approach. DBAs are your friends. Since most people usually avoid them, they might enjoy the attention. Lastly, open an account at your local bagel or donut shop.