Friday 31 July 2015

Contemplations from 'Common' Events

[This blog was originally published as a Dutch article in TestNet Nieuws (http://nieuws.testnet.org/vak/overpeinzingen-uit-alledaagse-dingen/)]

Two weeks ago I experienced a disruption in production, a very serious one, especially for me. I was able to navigate to a safe point and that was it. Frustrated, I called the helpdesk and started explaining what I was doing up to the moment the disruption occurred, what I did that triggered it, and what the impact was for me. While telling the story, I noticed I was thinking about the signals I had been ignoring up until the disruption and all the workarounds I had been applying, and whether I should mention them to the support desk or not. Were they related to this problem, had they perhaps contributed to it, or were they not related at all? Had the problem become worse over time, or had my actions made it worse, or maybe even harder to solve or unsolvable? I thought that if I had this experience, then the tons of other users in the organisation who file incident reports probably go through the same thing. What if "my" testers, the ones who had been working on a project for a long time, had this problem? Or any tester in other organisations, for that matter?...

My contemplations were interrupted by a voice on the other side of the line: "I'll transfer you to TechSupport". Then it went quiet. I realised I had cursed the dull and corny waiting tunes hundreds of times before, but now that they were absent I was doubting whether I was still connected. I wondered if the same goes for the requests of users. They tend to throw things over the wall to the IT department all the time, even more so now that people are 'scrumming' and requests are realised almost immediately... We now have features in the software that people wanted really badly; now that they are there, those features have exposed even worse problems or have created situations in which users' needs still aren't served. The silence on the other side of the line is deafening, but the clock on my phone indicating the connection time is still ticking, so apparently I still have a connection.

I'm hesitating whether I should call again, and just before the 'moment suprême' a voice sounds on the other side of the line. I start explaining the disruption again from beginning to end and decide, this time, to also mention that I had been ignoring signals and using workarounds for a while. While I'm telling this, I hear the guy on the other side typing frantically, and I realise that I have seen adjustments to the 'history field' or even the 'description field' itself on several occasions after a bug was initially logged. I smirk a bit: this principle apparently applies not only to 'us testers'. The tester's conscience in action.


I'm restarting and it all seems to go in the right direction. I'm still getting a message, but I'm helped for now. I'm getting along nicely when suddenly the whole thing stops abruptly; nothing reacts as it should. I call the support desk again, tell the story, get forwarded to TechSupport, and now physical support is on its way too. They take a look, even use a special diagnostic device, draw a conclusion, and I'm presented with a description of the solution.
 
I'm now at the party that is going to solve my disruption. Now that I have the solution, I hear myself skipping the problem history altogether and stating: "that's the solution, you fix it". I have a diagnostic report after all, and I know exactly what the cause of the problem is. I'm flabbergasted when I'm called a few hours later to hear that an investigation has been done, that the cause has been found and that they are going to fix it; exactly as I stated earlier. I ask myself whether "my" testers have this same knack: do they redo the whole diagnosis when they get work transferred from another tester, or do they trust the work of the tester before them? Do developers ask the new tester in their project to redo all the test work that has already been done, just to make a new diagnosis?

I nearly get a heart attack when I hear the guy on the other side of the line mention the amount to be paid for the solution. I'm quiet for a bit. I have done some investigating 'on the internet' myself into the different possibilities for fixing the problem, and I have seen (exactly the same) solution at a fraction of the amount this guy is presenting. The only thing is that I would have to get my hands dirty myself. In an impulsive moment I blurt out that I will (thus) fix the problem myself.

There's silence on the other side of the line (no, I'm not expecting waiting music this time) and then the voice says that I still have to pay the diagnostic fee. Clearly annoyed now, I state that I will not pay this fee, since I didn't ask for it. Even more so: I had already presented them the solution in a report; did I charge them my diagnostic fee? Again my thoughts wander off to my work situation; isn't this exactly what we are doing as testers? Doing the work of our predecessor all over again, because we want our own view on the problem or we don't trust the data of the one who tested before us, and then charging the costs to our clients (time, money, etcetera...). I mumble something about 'service' being a virtue and I end the phone call after some grumbling and discussion.

In the aftermath my thoughts go to the situation at work: many disruptions, issues and bugs are raised too easily by users, because they have no idea what a solution costs, especially since it's not their own money they're spending. I wonder whether, even when the problems are a bit more complicated by nature, people would solve them themselves if they were rewarded for it. Solving things themselves would be cheaper than having them solved by the (more) 'expensive' IT department. Would one then solve problems more quickly, and not spend time on implementing workarounds that might worsen the problem or make it unsolvable? What would that mean for 'us testers'? Should we trust the 'results from the past'...

And now? For a fraction of the cost I have fixed the problem myself. What? A tester isn't supposed to fix a program? Says who? Is that relevant at all?

Oh? Didn't I mention that this wasn't an IT problem? No... I had car trouble.
The car broke down on the highway while I was on my way to a hike on a nice, slightly chilly Sunday afternoon. I had the ANWB (Dutch breakdown service) on the phone: first the regular helpdesk, then technical support. The tech guy said I could drive on with the problem after restarting the car, but when the problem worsened, the ANWB van with a mechanic came by.
The cause of the problem was a broken ABS ring (just Google it), repairable in a few easy steps. The dealer asked more than twenty times (!!!) that amount, because they couldn't order the ABS ring on its own, only together with the whole axle. In the end I did the repair myself and I'm driving there and back again. I also got the invoice for the 'service'... 50 euros for plugging a device into the car, which the guy from the ANWB had already done at the side of the highway.

And so... the last lesson of this article is... only in their context do things really become clear.


Pictures: My own repair attempt and Smart HobbyRepair day in Heemskerk (where I got some helping hands), own archive and Ricardo Vierwind

Tuesday 10 March 2015

Let's blog about... Let's Test BeNeLux

Once a regular time to start the day... now an unholy moment to get up. I got on the bus at 05:42; the driver hadn't even bothered to turn on the lights yet. Easy on the eyes, though. Travelling by train was quite fine today, unlike yesterday, when I had to arrange a car at the last minute because of 'actions by NS personnel'. At approximately 08:30 I stepped into 'Mezz' for Let's Test BeNeLux; a great venue when your tagline is 'For those about to Rock', since it's a smaller (music) stage/rock venue. At registration there were already some familiar and also loads of unfamiliar faces for me. It's always easy having the longest name on the registration list; a fast and easy find :-)


After some coffee I ran off to the main stage where James M. Bach was scheduled for the opening keynote about 'checking versus testing'. In style, the keynote starts with some rock music by AC/DC and James plays the part with a striking pose :-). Interaction is encouraged and a 2D code is shown to download the deck on-site (saves note-taking), so I have an easy job: I only have to write down the keywords and scribble down my doodles.
My interpretation of this keynote is that checking seems to be the fetish of people like managers, who don't understand that testing is more than automatically running stuff, and that checking is part of testing. Testing being 'evaluation by learning through experimentation and exploration, including questioning, modeling, observation, inference, etc.' It's like morphine: something for professionals to use for a specific purpose, but not to be given to children.
When we look into testing there are four quadrants: spontaneous testing, spontaneous checking, deliberative testing and deliberative checking. All activities, no matter which quadrant they are in, are useful, but it takes people who understand the matter to really make them valuable. The key is 'making sense', which is the part that can't be automated (probably also the reason why 'sensemaking' has 'sense' or 'sentient' in it ;-))
As I see it, checking is something that can be defined, and when you have difficulty defining it as a specific criterion, you probably have something before you that belongs in the category of sentient, non-checkable testing. Checking is derived from algorithms.
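
To make that concrete for myself, here's a minimal sketch in Python; the function under test and the expected values are entirely made up, just to illustrate where I think the boundary lies:

    # A check: an algorithmic decision rule with an explicit pass/fail criterion.
    # The function under test and the expected values are hypothetical.

    def shipping_cost(weight_kg: float) -> float:
        """Imaginary function under test."""
        return 4.95 if weight_kg <= 2.0 else 9.95

    def check_shipping_cost() -> bool:
        # Checkable: the criterion is fully defined, no human judgment needed.
        return shipping_cost(1.5) == 4.95 and shipping_cost(5.0) == 9.95

    print("check passed:", check_shipping_cost())

    # Not checkable in the same way: "is 9.95 a reasonable price for a customer
    # with a 5 kg package?" Answering that needs sensemaking, i.e. testing.
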
In the Q&A I asked a question that referred to something James called epistemic testability, which was explained as the things we already know. Together with the mention of the 'history oracle' (the things we see or find that we already know), I wondered how to cope with the things we merely think we know.
As I interpreted James' answer, this is the core of testing. He referred to the story of the 'Silver Bridge', which had a flaw in it from the beginning, but the problem only emerged after 40 years. He also mentioned having dinner: what are the acceptance criteria there, and how are you going to define up front when you are done? It's all about discussion and conversation, but also about having an attitude of acceptance; acceptance that problems can and will be present in the things we test. With this knowledge and mind-bender, I went for the coffee break.


After the coffee break James Lyndsay gave a very energetic talk called 'A nest of test'. First time I had to take out my laptop in a non-testlab room and test during the track!! How cool is that. Check out the IP 52.16.45.184 for some interesting test stuff. I really had a good time puzzling around and figuring out what caused the things I encountered. It was cool to test with a room full of people and to have people hypothesising about the things seen on the screen when changing the parameters. It felt like this is what 'Let's Test' is all about: learning and, especially, doing together. Sorry for being so short in this part, but being very busy with the tools reduced the time available for blogging...


.... The story continues...

What a fabulous lunch! Good food and a very sunny terrace outside with testing colleagues. It was almost too difficult to drag my ass into the venue again.

 
But I got myself up to listen to Jean-Paul van Varwijk about the challenges of implementing context-driven testing (at Rabobank International).
Jean-Paul talked about some Dutch context (the Dutch apparently have loads of publications about testing compared to other countries) and the steps that led to the implementation of context-driven testing. Rabobank, partly because of the crisis and the wish to become more agile, changed into an organisation with 'domain-based delivery teams'.
It's surprising to hear about 'thought leadership' in this particular case, since I have often heard the term dismissed as nonsense, because you can't give leadership to thoughts. My own take was that a thought leader is someone who knows his (or her!!!) stuff and guides people to investigate new things, to learn, and to educate and stimulate development; that view was mostly waved away. So understand my surprise that the thought leader is described in this presentation exactly as such!
Jean-Paul tells about the uncertainty of not having guidance and direction, about being a bit down about not knowing where the organisation is heading, but also about recently being more enthusiastic, because the direction is now more outspoken and he's even motivated to organise workshops again. I found this last part of the track the most valuable, since it (again) points out, to me, that having the organisation or management point in a direction, or show leadership, is essential, especially in turbulent times or during change programmes / organisational changes (and implementations), to keep your people motivated and stimulated and to keep reminding them that they are invaluable to the organisation, even during times of turmoil.



After Jean-Paul, Joep Schuurkes took the stage to do a track called 'Helping the new tester to get a running start'. He made an analogy with learning to navigate a city, to make the point that the 'usual suspects', such as plain documentation, a map, route descriptions, etc., won't make a newbie in the company a happy starter. He has lots of images of his home town of Rotterdam to explain the different aspects of introducing an employee into the company. For instance, when showing a picture of Rotterdam right after WWII (flat), he explains that a historic view might not be that interesting for your new team member, since they have to work on the now and on future development; but then again we (IT in general) are too history-unaware, and an overview is important to know how you got where you are. Slide by slide he adds and adds to the package, only to tell us that we need to become more abstract and take a more guideline-like approach, with the following key areas: provide structure, model the application (the SANFRANCISCODEPOT heuristic), model your approach to testing (mind the overhead hazard), guide interactions with the application and with the team, empower the new tester (mastery, autonomy, purpose) and, last but not least: have fun!


I hoped to warm up in the sun during the afternoon break, the conference room being a fridge. But I ended up having a great conversation about conferences and about German literature being an inspiration for a workshop on reporting (looking forward to seeing it at one of the future conferences!).


Back to the stage in the fridge again. Andreas Faes starts his track, titled 'Testing test automation model', by telling the story of the whale, which experiences different things in the 'emptiness' of space and defines those things to create its own model for understanding them. I love the story about counting: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, €... 'Euro' being a number in the model of his son, who hasn't grasped the concept of currency yet. By assimilation this model is correct in his son's mind, but anyone who understands currency knows € isn't a number, of course. It's all about understanding models and verifying them... :-). Making a bridge to models in test automation, Andreas explains his path to the now, explaining some historic concepts along the way and addressing what implicit and explicit models are, and specifically how to get from an implicit (test) model to an explicit (automated) one. The idea mentioned here, a domain-specific language, sounds familiar to me and I can't help but think of 'Kenniskunde' (sorry for the international guys; it's a concept by Sjir Nijssen about the use of proper Dutch language, mathematics and logic in daily practice) or 'Kennis Representatie Zinnen' (Google translates this to 'knowledge representation sentences', but I wonder if that carries the same meaning); like the article, it seems a Dutch principle, but I'm sure there's a non-Dutch version as well. It triggers me to look into this matter more, and it disappoints me a bit that the track is suddenly over. It feels like it ended very abruptly and I would have loved to hear more about this, but I guess the fact that I am triggered is also valuable, so I have to be satisfied for now.
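
To get my head around that implicit-to-explicit step, I sketched it for myself afterwards. Everything below is my own invention, not from Andreas' talk: the implicit model "a customer who pays gets a confirmation" becomes explicit once it's written in a small domain vocabulary that a machine can execute.

    # Hypothetical sketch: an implicit test idea made explicit through a tiny
    # domain-specific vocabulary. All names here are invented for illustration.

    class Shop:
        """Minimal stand-in for the system under test."""
        def __init__(self) -> None:
            self.confirmations: list[str] = []

        def pay(self, customer: str) -> None:
            self.confirmations.append(customer)

    # The domain vocabulary: each function names a business action or expectation.
    def given_a_customer(name: str) -> str:
        return name

    def when_the_customer_pays(shop: Shop, customer: str) -> None:
        shop.pay(customer)

    def then_a_confirmation_is_sent(shop: Shop, customer: str) -> None:
        assert customer in shop.confirmations, f"no confirmation for {customer}"

    # The explicit model: readable in domain terms, executable by the machine.
    shop = Shop()
    customer = given_a_customer("Alice")
    when_the_customer_pays(shop, customer)
    then_a_confirmation_is_sent(shop, customer)
    print("explicit check passed")
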

Instead of Jacky Franken, Pascal Dufour now takes the stage, which I find a bit of a shame, since I skipped Jacky's track at an earlier conference knowing I would see it here. Pascal's topic is very relevant for me, though, so it makes up for the loss: 'Automation in DevOps and Continuous Delivery'. From continuous integration, to continuous delivery, to continuous deployment: 'continuous' seems to me to be about ensuring a constant, fast feedback loop to development, team or customer, depending on which type of 'continuous...' is used. DevOps is then explained, because, as I understand it, to be truly agile in development, whether that is XP or Scrum, development and operations should be 'on each other's lap', so to speak; hence DevOps. I got confused during the track, because DevOps started to sound like a line of tools for pushing work through a development lifecycle, but checking the wiki set me back on track. Getting back into the track, an example is shown of a check in Cucumber, plus a summary of what is possible and what is still to be done. And then suddenly the presentation is over and slides over into a discussion. It keeps me wondering: do continuous integration, continuous delivery and continuous deployment also need, or imply, continuous testing?... or is only checking possible then?...
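
I don't have Pascal's actual Cucumber example in my notes, but the shape of such a pipeline check, translated into a plain Python sketch (the health endpoint and URL are assumptions of mine, not from the talk), would be something like this:

    # Sketch of the kind of automated check a CI/CD pipeline could run on
    # every commit; the service URL and endpoint are hypothetical.
    import sys
    import urllib.request

    def check_health(url: str = "http://localhost:8080/health") -> bool:
        """One automated check: does the deployed service report healthy?"""
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                return response.status == 200
        except OSError:  # covers connection failures and timeouts
            return False

    if __name__ == "__main__":
        # The exit code is the fast feedback loop: 0 lets the pipeline continue
        # to the next stage, 1 stops the delivery right there.
        sys.exit(0 if check_health() else 1)
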

After the testlab rats James Lyndsay and Bart Knaack had finished the testlab report and Huib Schoots closed the official part of the day, the crowd went to the bar or to the hot dog stand of 'dokter Worst' outside, enjoying a hot dog, some fries and a beer (or wine, or a soft drink, etc.) and some after-conference conversations. I called it a day when I had just finished my hot dog and (after all, it IS almost a summer day) a glass of rosé.
I had an excellent day with good tracks and talks, and I learned a lot. I think this Tasting Let's Test, this year called 'Let's Test BeNeLux', is a nice opportunity for those who can't afford the 17,000 Swedish kronor (ex 25% VAT!!) to attend the full edition. Hope to attend again next year.