Posts tagged with 'architecture'
Welcome to the latest installment of the Brief Bio series, where I'm writing up very informal biographies about major figures in the history of computers. Please take a look at the Brief Bio archive, and feel free to point out corrections and omissions in the comments.
John von Neumann
After Ada Lovelace's death, there's a pretty big lull in notable computer-related activity. World War II was the main catalyst for significant research and discovery, so that's why I'm skipping ahead to figures involved in that period. If you think I've skipped over someone worth covering, please mention them in the comments.
But now I'm skipping ahead about half a century to 1903, which is when John von Neumann was born. He was born in Budapest, in the Austro-Hungarian Empire, to wealthy parents. His father was a banker, and von Neumann's precociousness, especially in mathematics, can be at least partially attributed to his father's profession. Von Neumann tore through the educational system and received a Ph.D. in mathematics when he was 22 years old.
Like Pascal, Leibniz, and Babbage, Von Neumann contributed to a wide variety of knowledge areas outside computing. Some of the high points of his work and contributions include:
- founding game theory, beginning with his minimax theorem and culminating in Theory of Games and Economic Behavior, co-written with Oskar Morgenstern
- establishing the mathematical foundations of quantum mechanics
- fundamental work in set theory, ergodic theory, and operator theory
He also worked on the Manhattan Project, and thus helped to end World War II in the Pacific. He was present for the very first atomic bomb test. After the war, he went on to work on hydrogen bombs and ICBMs. He applied game theory to nuclear war, and is credited with the strategy of Mutually Assured Destruction.
His work on the hydrogen bomb is where Von Neumann enters the picture as a founding father of computing. He introduced the stored-program concept, which would come to be known as the Von Neumann architecture.
Von Neumann became a commissioner of the United States Atomic Energy Commission. Here's a video of Von Neumann, while in that role, advocating for more training and education in computing.
In 1955, Von Neumann was diagnosed with some form of cancer, possibly related to his exposure to radiation at the first nuclear test. He died in 1957 and is buried in Princeton Cemetery in New Jersey.
I encourage you to read more about him in John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More (which looks to be entirely accessible on Google Books).
AOP is best used for cross-cutting concerns, which are often non-functional requirements. But what's the difference between functional requirements and non-functional requirements?
A functional requirement is a requirement about what an application should do. It manifests in a form the user can directly observe: business rules, UI logic, and so on.
A non-functional requirement is a requirement about how an application should work. This is stuff a user typically doesn't see (and probably doesn't care about): logging, caching, threading, and so on.
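To make the distinction concrete, here's a minimal sketch of treating logging as a cross-cutting concern. It's in Java rather than .NET, and it uses a plain JDK dynamic proxy instead of a real AOP framework; `CustomerService` and its implementation are made-up examples, not from any of the posts:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;

// A hypothetical business interface: pure functional requirement.
interface CustomerService {
    String findCustomer(int id);
}

// The implementation never mentions logging at all.
class CustomerServiceImpl implements CustomerService {
    public String findCustomer(int id) {
        return "customer-" + id;
    }
}

public class LoggingDemo {
    // Wrap any object in a proxy that logs each method call: the logging
    // (a non-functional concern) stays entirely out of the business class.
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("call: " + method.getName()
                    + " args: " + Arrays.toString(args));
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    public static void main(String[] args) {
        CustomerService service =
                withLogging(new CustomerServiceImpl(), CustomerService.class);
        System.out.println(service.findCustomer(42)); // logs the call, then prints "customer-42"
    }
}
```

An AOP tool (PostSharp, Spring.NET, AspectJ, etc.) would let you apply this kind of interception declaratively across a whole codebase; the hand-rolled proxy just shows the core idea.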
So here's a quick exercise for you:
Which of these are functional and which are non-functional? Why?
- Addresses should be US-only, and ZIP codes must be five digits.
- Every method call should log its own name and arguments to a text file.
- When returning a list of Customers, only Customers the current user is authorized to see should be returned.
- When submitting a new Customer, it must be put in a pending queue for approval by an administrator.
I'll post my answers in a later post, but feel free to leave your answers in a comment.
I came across an abstract and slides (PDF) about using AOP to detect code smells. It got me thinking: are clean code and SOLID architecture themselves a cross-cutting concern? Usually "good, maintainable code" isn't ever written down explicitly as a requirement (functional or otherwise); it's just sorta assumed that developers will write the best code they can. Obviously that doesn't always happen, and the customer probably won't know one way or the other until after release.
As a developer, it's sometimes hard for me to be objective when looking at my code and the choices I've made. Pair programming is one way to help alleviate this: I get instant feedback from another developer as I'm coding and making decisions. Test-driven development also helps, by forcing me to write code that's easy to test (and therefore loosely coupled). But not every project or code base has the luxury of either of these things: maybe there's only one developer, or maybe it's a legacy code base. Whatever the reason, another approach is code analysis: code metrics like cyclomatic complexity and maintainability index. There are also heuristics, aka "code smells", that usually (though not always) indicate there might be a problem.
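As a quick illustration of the first metric: cyclomatic complexity is commonly counted as one for a method's straight-line path plus one for each branch point. The toy Java method below (the names are mine, purely for illustration) has a complexity of 4:

```java
public class ComplexityExample {
    // Cyclomatic complexity: 1 (the straight-line path) plus 1 for each
    // branch point (if, else-if, loop condition, case label, etc.).
    static int discountRate(int age, boolean member) {
        int rate = 100;                // base path -> 1
        if (age < 18) rate -= 20;      // branch    -> +1
        else if (age > 65) rate -= 30; // branch    -> +1
        if (member) rate -= 10;        // branch    -> +1
        return rate;                   // total complexity: 4
    }

    public static void main(String[] args) {
        System.out.println(discountRate(70, true)); // prints 60
    }
}
```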
There are three code smells addressed by Juliana Padilha's slides, not all of which I'd heard of before:
- Divergent change: this sounds like the opposite of Single Responsibility. That is, the class has more than one reason to change, and thus its responsibility is diverging.
- Shotgun surgery: I've not heard this term, but I've certainly seen it (and been guilty of it myself). Making a change requires touching a handful of different classes instead of just one or two.
- God class: I actually have heard of this one, and if you consider ASP.NET classes with 300+ line Page_Load methods to be God classes, then I've certainly seen it and done it. (There's a contrived sketch of this and divergent change just after the list.)
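To make the first and third smells concrete, here's a contrived Java sketch (the class and method names are mine, not from the slides): one class accumulating unrelated responsibilities, followed by the single-responsibility classes it could be split into.

```java
// A "god class": persistence, UI formatting, business rules, and
// notifications all live here, so it has many reasons to change
// (divergent change) and every edit risks rippling through it.
class OrderManager {
    void saveToDatabase(String order)  { /* persistence logic */ }
    String renderHtml(String order)    { return "<div>" + order + "</div>"; } // UI logic
    double applyDiscount(double total) { return total * 0.9; }                // business rule
    void sendEmail(String to)          { /* notification logic */ }
}

// Refactored toward Single Responsibility: each concern gets a home,
// and each class now has exactly one reason to change.
class OrderRepository { void save(String order) { /* persistence only */ } }
class OrderView       { String render(String order) { return "<div>" + order + "</div>"; } }
class DiscountPolicy  { double apply(double total) { return total * 0.9; } }
```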
The metrics she uses to find these smells are not traditional metrics but "concern-driven" metrics, meant to identify code "scattering" and "tangling" (i.e., the code that AOP is meant to help refactor). They include:
- Concern Diffusion over Class (CDC)
- Concern Diffusion over Operation (CDO)
- Number of Concerns per Class (NCC)
- Concern Diffusion over Lines of Code (CDLOC)
These metrics weren't defined in the slides, but I found them in another white paper from Columbia.
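The white paper has the formal definitions, but to illustrate the flavor of something like Concern Diffusion over Class, here's a toy Java sketch. The @Concern annotation and the class names are made up for illustration (they're not part of the metrics suite); it simply counts how many classes each concern is scattered across:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashMap;
import java.util.Map;

// Hypothetical marker: tag a class with the concern(s) its code touches.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Concern { String[] value(); }

@Concern({"logging", "persistence"})
class AccountDao { }

@Concern({"logging"})
class ReportGenerator { }

public class ConcernMetrics {
    // Rough take on Concern Diffusion over Class (CDC): for each concern,
    // count the number of classes it is scattered across.
    public static void main(String[] args) {
        Class<?>[] classes = { AccountDao.class, ReportGenerator.class };
        Map<String, Integer> diffusion = new HashMap<>();
        for (Class<?> c : classes) {
            Concern tag = c.getAnnotation(Concern.class);
            if (tag == null) continue;
            for (String concern : tag.value()) {
                diffusion.merge(concern, 1, Integer::sum);
            }
        }
        System.out.println(diffusion); // e.g. {persistence=1, logging=2}
    }
}
```

A real concern-mining tool would infer the concerns from the code itself rather than rely on hand-placed annotations, but the counting idea is the same: the more classes a single concern touches, the more scattered (and the more AOP-friendly) it is.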
Do you agree with these concepts?
TDD is great, BDD is better. DDD is the way it should be done. SOLID principles should always be followed. Don't repeat yourself. Be agile. Follow the boy scout rule. Use an IoC tool and/or dependency injection. Don't reinvent the wheel. Always normalize your SQL tables. Use AOP to avoid boilerplate and clutter.
I do. And a lot of other people do too. But why? Because:
- ...it's been drilled into your head by peers and at software conferences?
- ...you have baggage from a previous job or hate your current job?
- ...you read about them all in a book like Clean Code, and assumed Uncle Bob knows what he's talking about?
Or have you actually researched and practiced each of these concepts and found them to be superior in many cases to the alternatives (which you've also researched and practiced)?
Well, for me, the answer is: all of the above. Except I don't hate my (current) job.
Yes, I just admitted that I've blindly subscribed to a lot of the tenets you probably hold dear because of peer pressure and authority figures. That isn't necessarily a bad thing: authority figures and peer pressure can be useful constructs. But without your own healthy skepticism and critical thought, they can be dangerous too.
So before you run with any new acronym like BDD or AOP, do your research: see what your peers think, see what authority figures think, and finally use your own brain to put it all together.
One of my favorite acronyms is "RTRJ", for "Right Tool for the Right Job". At worst it's something of a cliche, and sometimes even an excuse or blanket answer. But I think it's an important thing to keep in mind whenever you're thinking about the architecture of an application. If there were a perfect framework or tool for every job, architecture would be easy: every developer would always be on the same page.
I came across this article on CodeProject about making architectural decisions and choosing technologies for a .NET project. While you may not agree with every choice made for that project just from reading, I think it's interesting and important to note why each decision was made. For instance, the author was looking at IoC/AOP tools for the project, and considered many popular tools like Spring.NET, LinFu, and PostSharp. Ultimately, the author determined that Spring.NET was the best choice because of its documentation, integration with NHibernate, and the fact that it also comes with dependency injection capabilities. The author also notes the downsides and risks to using Spring.NET.
I think "Right Tool for the Right Job" consists of two main concepts: understanding the job and choosing the right tools. Everyone can probably rattle off their favorite tools, but are all of those tools correct for this project?
- Is the rest of the team more familiar with and comfortable with a different tool?
- Will this project be cheaper or faster to finish with a certain tool?
- What about support: is an older tool with a bigger support community better than a newer tool that doesn't have a big following yet?
- Are the benefits of a new tool going to outweigh the uphill challenge of learning curves and convincing others to adopt it?
- Biases: even if you don't admit you have them, you have them. Figure out a good way to make your analysis objective: have others on the team do the spike and get their opinions. Try to play devil's advocate. Find someone to ask you the hard questions and find the answers.
- Is tool X being used just because it's always been used before? There's probably a good reason for that, but don't be afraid to challenge assumptions and the status quo, even if it does end up being quo.
Above all, don't forget: shipping is a feature. You could spend eternity trying to figure out just the perfect combination of tools and architecture, but ultimately the best way to get feedback is to get building! If you find yourself putting a square peg in a round hole during your first sprint (I'm assuming you are using some sort of agile methodology), say so at your retrospective and switch to something else while you still can.