Software: Getting it done, maybe the right way...
To craft software is wisdom!!!
Tuesday, January 10, 2017
It's all about decoupling...
Do you remember the charger we had for our Nokia 1100? What could you do with it? You could charge a Nokia 1100 and maybe a few other Nokia phones.
What does the charger that comes with, say, my Moto G4 Plus do now? If I pull out the USB cable, I can use it to transfer data between my PC and tablet. My wife takes the plug and uses it to charge her Kindle.
Suddenly, a charger that could only charge one phone was turned, with a simple change, into a multi-purpose gadget.
What made the difference?
- Division of labor and total ignorance - The charger was split into a plug and a USB cable. The plug feeds power into a USB cable and doesn't care who is at the other end of that cable. The cable just transmits and doesn't care whether it carries power or data. The parties at both ends decide what to transmit and how to consume it.
- Smaller parts interacting through well-defined interfaces - The plug converts 220V AC to what USB expects, 5V DC, and provides the right slot and pins for the USB connector to fit. Remove the plug and replace it with a PC, and the PC's USB port feeds 5V DC to the cable. The same happens on the other side, be it a mobile phone, a tablet, or a Kindle: they either get charged or transfer data.
That's a perfect example of a loosely coupled architecture. It shows how different components in a system can work without stepping on each other while extending the overall functionality. I would really like to see software designed this way. If I may draw a parallel for software components, I would ask for the following properties.
Ignorant
The unit of software needs to be ignorant of the environment and the context it is working in. It does its one job, consistently and flawlessly. This might seem stupid, but these software units are the workhorses of a bigger software system, and it is very sensible for them to stay oblivious. The software components sitting at a higher level of abstraction will use these workhorses based on context. That will be the one job of the higher-level component, and it will do that one job consistently and flawlessly. So, what do you get by being ignorant:
1. Independent software units. (Architects love it)
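As a hypothetical sketch (the names and numbers here are mine, purely for illustration), an "ignorant" workhorse and the higher-level component that applies the context might look like this:

```python
# An "ignorant" workhorse unit: it multiplies an amount by a rate it is
# handed, knowing nothing about currencies, markets, or where the rate
# came from. That is its one job.
def convert(amount: float, rate: float) -> float:
    return amount * rate

# The higher-level component owns the context (which rate applies to
# which currency) and uses the workhorse; that is *its* one job.
class PriceQuoter:
    def __init__(self, rates: dict):
        self.rates = rates

    def quote(self, amount: float, currency: str) -> float:
        return convert(amount, self.rates[currency])

quoter = PriceQuoter({"INR": 83.0, "EUR": 0.92})
print(quoter.quote(10, "INR"))  # 830.0 -- convert() never knew it priced rupees
```

Because `convert` stays oblivious, the same workhorse can serve any context a higher-level component cares to put it in.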
Adamant
The unit of software should be adamant about how it is communicated with. There is only one well-defined way to talk to the software component. Violating the contract ends in an error, generally a specific error that says you have violated the contract. I love statically typed languages specifically because most of these contract violations can be caught at compile time; with the rise of dynamic languages, we rely on unit testing instead. As long as the clients know how to talk to the component and what the component can do, the component can be used in multiple ways.
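A minimal sketch of that adamance in a dynamic language (the component and error names are invented for illustration): since there is no compiler to enforce the contract, the component checks it at runtime and fails with a specific, named error.

```python
# A component that is adamant about its contract: one well-defined way
# in, and a specific error when the caller violates it.
class ContractViolation(TypeError):
    """Raised when a caller breaks the component's contract."""

class Discounter:
    def apply(self, price, percent):
        # The contract: price is a non-negative number, percent is 0..100.
        if not isinstance(price, (int, float)) or price < 0:
            raise ContractViolation("price must be a non-negative number")
        if not isinstance(percent, (int, float)) or not 0 <= percent <= 100:
            raise ContractViolation("percent must be between 0 and 100")
        return price * (100 - percent) / 100

print(Discounter().apply(200, 10))  # 180.0
```

In a statically typed language most of these checks would move to compile time; here they become runtime guards backed by unit tests, as described above.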
The challenges in writing such a piece of software... let's delve into that some other time.
Monday, November 30, 2015
Lessons from refactoring #1
We are in the process of moving a monolithic application into a modular one. So we split it horizontally into layers and vertically into domain-specific functionalities. The lowermost layer, which most people call the data access layer, was responsible for fetching data from external systems. The layer above it serves as a translation layer that converts the data from external systems into a consumable format (shedding unwanted info, converting errors to exceptions, etc.) and also does some processing on it (what people generally call business logic). The user interface/presentation layers sit above these. As usual, we didn't have all the time in the world, so we kept the monolith's code the same for the lower layer and wrote the business layer from scratch. After all, it's the business layer on which the applications will depend.
I still think the thinking was right. Where we missed was in how we validated the business layer. It was designed with unit testing in mind. It was beautifully written with interfaces and provisioning for dependency injection, and voila, we were proud that we had loosely coupled code. No, no, no, that is not the bad thing; the bad thing is that this loosely coupled code did not do the right thing. That module had a unit test coverage of 99%, and still the unit tests did not identify that our beautiful code was not doing what it was supposed to do, and that is VERY BAD.
So where did we go wrong?
- We referred to (isn't that a nice way of saying copied?) the old monolith code base to implement the new modules.
- Though we really did start by just referring, very soon the developers started copying code out of the old application. The monolith was started almost 6 or 7 years ago, and since that day code was always added and never removed, which means there was a ton of unused code in there. All this code got into the new module with nice names and a good way of using it, except that it was never going to be used.
- The unit tests were written to test what was coded.
- As I said earlier, half of what we wrote was crap, and we had unit tests that tested almost 100% of that crap.
- We believed that since unit testing was in place with 100% coverage, we could relax on reviews and churn out more code and tests. Working code over everything else (we're agile!!!).
- None of us reviewed the quality of the unit tests.
So the lessons learnt:
- We should have started with the real requirements in place. We should never have said, "Guys, we need to make this monolith modular; believe it or not, you've got a chance to write a module from scratch." What we should have said is, "Guys, here is what needs to happen; write code that does it." Had we done that, all the dead code that was taken in would have been avoided.
- We should also have said, "Wait a minute, guys, before you go and crank out some code, can you write some unit tests around 'here is what needs to happen'? So that even if I break my neck today, those tests will say exactly what I wanted you to do." Yeah, we could have done Test Driven Development, not exactly the way Uncle Bob says (forgive me, Uncle Bob), but at least there would have been tests that validate the code against the expected behavior rather than against what is already written.
- If we had reviewed the unit tests, we would have found out that they were just bogus.
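To make the second lesson concrete, here is a hypothetical sketch (the refund rule and names are invented, not from our codebase) of tests written from "here is what needs to happen" rather than from existing code:

```python
# Behavior-first: the requirement is written down as tests *before* any
# implementation is copied in, so the tests validate the requirement,
# not whatever code happened to exist.
def eligible_for_refund(order):
    """Requirement: refund only paid orders returned within 30 days."""
    return order["paid"] and order["days_since_delivery"] <= 30

# These assertions are the requirement, stated executably.
assert eligible_for_refund({"paid": True, "days_since_delivery": 5})
assert not eligible_for_refund({"paid": True, "days_since_delivery": 45})
assert not eligible_for_refund({"paid": False, "days_since_delivery": 5})
print("requirement tests pass")
```

Tests like these would have flagged the dead code immediately: code that serves no stated requirement has no test demanding it.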
But it's not too late... #3 happened, and the result is this post. My team and I are now on to #1 and #2. See you soon.
Ironically, I got to see this post today :)
http://blog.ploeh.dk/2015/11/16/code-coverage-is-a-useless-target-measure/
I am with you, Mr. Mark Seemann.
Wednesday, November 4, 2015
Functional Testing Pipeline for web applications
I do not know what else the rise of dynamic languages has brought, but one good thing is that it has thrown a very bright light on the importance of testing and has created a community of people who practice testing as a part of software development. While this is happening on one side, there are still many others who keep debating: do we really need unit tests, do integration tests matter more than unit tests, does TDD pay off, what to test and what not to test, is TDD dead, etc.
My team and I are currently building an ASP.NET website. On its back end, this website has to talk with multiple services to provide what the user needs. Let me try to explain what we have done; if it makes sense, it might be useful for some poor soul in the same situation as us.
- Get user input
- Do validation
- Convert the input to a format that is easy to process.
- Process the input, which is generally called the business logic.
- Convert it to a format that can be persisted.
- Persist the data.
- Take the result, and convert it to a format that can be sent back.
- Do some processing on that result.
- Transform it to a viewable format.
- Display it to the user.
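The steps above can be sketched as a pipeline of small functions, one per layer (everything here is illustrative: the tax rate, the fake store, and all names are mine):

```python
# Illustrative sketch of the request pipeline, one function per step.
def validate(raw):                      # step 2: validation
    if "amount" not in raw:
        raise ValueError("amount is required")
    return raw

def to_domain(raw):                     # step 3: convert to processable form
    return {"amount": float(raw["amount"])}

def process(order):                     # step 4: the "business logic"
    order["total"] = order["amount"] * 1.18  # e.g. add 18% tax
    return order

FAKE_DB = []
def persist(order):                     # steps 5-6: persist the data
    FAKE_DB.append(order)
    return order

def to_view(order):                     # steps 7-9: shape it for display
    return f"Total: {order['total']:.2f}"

def handle_request(raw):                # step 1 in, step 10 out
    return to_view(persist(process(to_domain(validate(raw)))))

print(handle_request({"amount": "100"}))  # Total: 118.00
```

Each function is a stand-in for a whole layer; the point is that every step is a separate, individually testable unit.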
What should be tested? All of it.
Will one kind of test cover it all? No!!!
How do we design it? Do we write all of this as one module? No. We split it across multiple layers and give each layer a cohesive functionality. Exactly the same applies to tests. We need different types of tests based on what area is being tested.
Unit Tests
A unit test should test some sort of logic implemented in the program. If there is no logic, we do not need to write unit tests. I have had pushes from management to get 100% unit test coverage, but that can be done only for code that has logic. So, in our case, we have close to 100% coverage on the business layer, 20% on the presentation layer, and 20% on the data access layer. The 20% coverage in the data access and presentation layers comes from test cases that cover transforming data between layers and error handling; otherwise, there is not much to unit test in these layers.
The unit tests start with a mocked input, apply it to the unit (method or class) with mocked dependencies, and check the result against the expected output. These tests validate the code that we have written and do not validate:
- Object/data format. (We hardcode them; who knows, the actual data from the other layer might be completely different.)
- Dependency on configuration files.
- Dependency on databases.
- Dependency on external systems.
- User interface
- Input/Output channels
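A unit test in that spirit might look like this sketch (the `Greeter` and its clock dependency are invented for illustration): mocked input, mocked dependency, assertions only on the logic.

```python
from unittest import mock

# The unit under test, with its dependency injected so it can be mocked.
class Greeter:
    def __init__(self, clock):
        self.clock = clock

    def greeting(self):
        return "Good morning" if self.clock.hour() < 12 else "Good evening"

# The dependency is a mock: no real clock, config file, database, or
# external system is touched, exactly as the list above prescribes.
clock = mock.Mock()
clock.hour.return_value = 9
assert Greeter(clock).greeting() == "Good morning"

clock.hour.return_value = 20
assert Greeter(clock).greeting() == "Good evening"
print("unit tests pass")
```

Everything the mock hides (the real data format, the real system behind it) is precisely what gets deferred to the integration tests.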
Integration Tests
Our integration tests cover exactly the items that the unit tests have missed. We start our integration tests with fabricated but valid inputs at the presentation layer level. In ASP.NET MVC jargon, we hit the controllers directly with a pre-populated HttpContext and any other data and look for the response. This exercises all the external systems and dependencies. There are no mocks: the application talks with the real systems and gives real outputs. Since we are not mocking the external systems, we end up setting up some data on those systems through external means or during the "Test Initialize" phase of the test. This makes integration tests harder and more time-consuming (both to code and to run) than unit tests, and hence we do not make them exhaustive. With these integration tests, we try to cover most of the user scenarios and functionality and achieve around 80% coverage of the code.
What is not tested as a part of the Integration tests?
- User Interface.
- Usability.
- Client side code that runs in the browser.
- Any interceptor code like Web Filters, that are hit before the flow reaches the Controllers.
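The shape of such a test can be sketched like this (a hypothetical handler and an in-memory SQLite database stand in for the real controller and the real external systems): fabricated but valid input at the entry point, real dependencies below it, and a "Test Initialize" phase that seeds real data.

```python
import sqlite3

# The "controller": it talks to a real store, not a mock.
def get_user_handler(request, db):
    row = db.execute(
        "SELECT name FROM users WHERE id = ?", (request["user_id"],)
    ).fetchone()
    return {"status": 200, "name": row[0]} if row else {"status": 404}

# "Test Initialize" phase: set up real data in the real store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'Arya')")

# Fabricated, valid input at the entry point; no mocks anywhere below it.
assert get_user_handler({"user_id": 1}, db) == {"status": 200, "name": "Arya"}
assert get_user_handler({"user_id": 99}, db)["status"] == 404
print("integration-style tests pass")
```

Swap the in-memory database for the real external systems and this is the same pattern we use against the controllers.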
End to End Tests
These tests are the closest to user behavior. We do not mock data; we do not mock systems. The tests start right from the browser, and the results are also verified at the browser. We use Selenium WebDriver to perform these tests and have lately moved from the Chrome driver to the PhantomJS driver. It seems good so far, and we're able to run more test cases concurrently with PhantomJS.
The unit and integration tests run as a part of CI, and hence we never have an unstable build hosted, even in test environments. The end-to-end tests are scheduled to run every day. With these three kinds of tests in place, we cover almost 80% of the testing and 100% of the functionality.
The remaining 20% is left for manual testing. It is a web app, and it is a human who is going to use it. Usability and user interface can only be experienced, and there is no replacement for manual testing in this area. With some sort of automation we can figure out that a button is in the right place or that the text is the prescribed size; however, only a human can tell whether the right place for that button is convenient and the prescribed size for the text is comfortable.


