Agile Chronicles #2: Code Refactoring

If we could see changes ahead of time, there’d be no need for the Agile process in the first place

This entry is about the joy of coding quickly, finding the balance between getting something done quickly vs. architecting for the future, and dealing with the massive amount of refactoring that's entailed in iterative Scrum development.

Coding Quickly

I'm coding like I'm in Flash again. Instead of spending 3 weeks setting up Cairngorm or PureMVC with all your use cases, agreeing on the framework implementation details with coworkers, and getting enough of a foundation together that you can actually compile the application and start seeing screens, you instead make a mad dash to get the app working in just a day or less.

Rather than discussing with your team what the best ValueObject structure is and how your service layer should work, you instead get a login service working in under 40 minutes. If something changes massively, such as the data structure of the user object returned, you just modify or delete & rewrite the entire ValueObject. You didn’t spend a lot of time on it anyway, so it’s not like your “architecture masterpiece” is getting deleted; it’s just some scaffolding code to get you up and running.
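
To make that concrete, here's a minimal sketch of the kind of throwaway ValueObject I mean (the class name and fields are hypothetical, not from the actual project):

    package com.example.vo
    {
        // Scaffolding ValueObject for whatever the login service returns
        // today. Cheap to modify, or delete and rewrite, if the server's
        // user data structure changes massively.
        public class UserVO
        {
            public var id:int;
            public var username:String;
            public var email:String;
        }
    }

A dozen lines with zero clever encapsulation; deleting it costs you nothing.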

Coding For the Future

…yet, it's not scaffolding; it's real code that needs to work, and work for the entire project. Deciding how much to write well-encapsulated code vs. just getting it done is extremely challenging, and fun. When do you git-r-done and when do you over-architect? How much and where? Hard questions to answer, fun times. Part foresight, part gambling, all calculated risk taking.

You know your service layer, the code that talks to the back-end, probably WON'T change. It's extremely unlikely that in the middle of your project you'll switch from .NET and XML to PHP and AMF. Therefore, you can spend more time architecting that portion with confidence that the extra time is well spent.

Anyone from a design agency should already find that very familiar. You have a series of impossible deadlines, and arrogant programmers (like me) exclaiming you must utilize OOP, design patterns, and frameworks. You’re challenged with meeting your deadline(s), trying to do right where you can, and learning throughout the process.

This is slightly different in Agile for product development (or even service development) in that, once launched, your application doesn't have a limited shelf life. It's an actual product. Traditionally, software is used 3 to 5 times longer than its originally intended lifespan, although I'd argue that with web software that is lessening. Even before launch, you'll be extending certain areas and expecting them to perform solidly. Deciding what to hack together for deadline's sake, and what to invest well-thought-out architecture time in, is really hard. REALLY hard. And fun!

UATs as Checkpoints

During sprint UATs (every other Friday for my team), or even just posting the latest working build for the team to see, you'll inevitably question certain functionality and performance. "Why is that screen so slow to load?", "Getting to this screen is more tedious than it should be…", or "My RAM and CPU usage are through the roof!" The designer may see their designed creation in action and totally change their mind about how it should look or work. The stakeholders, after using it, may realize that it totally doesn't solve their original goal(s) like they thought it would. You may even notice a bunch of positive enhancements to make to already-working sections.

This may sound frustrating, but it's good for a bunch of reasons. First off, this is the main reason Waterfall fails as a process: none of these things can happen until the project is COMPLETE, in the Validation phase, where you validate the software is on spec. A lot of you may have already had those things happen during a project; now imagine none of them happening until the entire product is complete. It's a lot harder to change that much code that late in the game. With Agile, you have the opportunity to fix bad decisions, improve design implementations, and add enhancements… early! This is when they can have the most positive impact, reduce risk, and get battle-tested more.

Secondly, when you go to fix something, you can code with more confidence since the functionality has at least been used. Programmers second-guess themselves all the time. They have to; early decisions made incorrectly can have disastrous consequences later (quoted from one of the Pragmatic Programmer authors in an interview). It's really frustrating to be insecure about how a user story actually works. After getting it "working" in a reasonable timeframe, and using & discussing it, you can have confidence that what you code is more "correct". Well… almost.

Third, your design gets more real. After banging on the implemented version of the design comps, your designer/UX person can make better decisions because their design is actually working, and the programmers can collaboratively discuss how to change or improve it. This assumes your designer/UX person hasn't moved on to another project by this point; keeping them on retainer for at least 4 hours a week is helpful for the project.

Fourth, you get confirmation that certain problems are in fact real problems. You may think something is slow, but if no one notices but you, does it really matter? Naturally, your ego as a programmer is inclined to fix it anyway, but remember: your goal is to get things done, not fix something that isn't broken. The same goes for problems you know of that other people also see; it's an iron-clad check mark that something is in fact a problem and needs to be addressed. If you have performance problems with full-screen video on your Mac in Safari and Firefox, and so does your project manager on Windows in IE, Firefox, and Safari, then you can confidently infer that the majority of other people will too.

Granted, testing with more than 2 people is preferred, but the point here is that you get a helpful checkpoint with a 2nd set of eyes. Coding this quickly, without too much care for architecture and while juggling a lot of moving pieces, is a lot to handle. Having a helpful team member confirm an issue early is better than finding it months later in QA, even if you knew about it and forgot. Bottom line: using a UAT as an early checkpoint for completed user stories ensures they truly are complete and good, and it points out problems or potential enhancements early.

Refactoring

The above leads to refactoring: rewriting or modifying existing code. A lot of times, refactoring is a pipe dream. Usually you're so focused on getting things done that the mere possibility of having time to make something work better or faster is the carrot that keeps you going.

Not in Agile. Based on the past 5 weeks and talking to Darrell (my project manager at Enablus), you refactor on average 30% of your code per sprint. You're coding so fast and so furiously that not everything is encapsulated as much as it could be (except for my service layer; it's tight, baby!). Not only that, but as you see the software in action, you can start making valid changes. Maybe the functionality didn't work as well as you originally thought it would, or perhaps you suddenly realize, now that you see it, that it needs something added.

While this is easy from a user story perspective (you just modify an existing user story or add a new one), it may not be so straightforward in code. A lot of times, there was no way you could have foreseen the change you are now tasked with making.

If we COULD see those changes ahead of time, there’d be no need for the Agile process in the first place.

This means that some of your code needs to be majorly reworked, or even just thrown away and redone from scratch.

While you're technically working on one user story, you're potentially breaking another. It's not necessarily spaghetti code, but it's certainly not orthogonal by the Pragmatic Programmers' definition… unless you've already architected that section out, you're a bad ass, or you're lucky. I'd argue the 30% is a loose average. In the first sprint, I didn't refactor anything, nor until the 2nd week of Sprint #2. In the 2nd and 3rd sprints, I was refactoring up to 40%. In Sprint #4, it'll definitely be at least 40% again. That first 40% arose from taking 3 tries to get a piece of functionality the designer wanted correct. The 40% next sprint accounts for my bitmap caching engine suddenly needing to save not just one, but two types of ValueObjects, and all the existing Views that now need to support both.

Not to mention the fact that we were working with the server-side team for the first time and still figuring things out. The percentages are not indicative of the entire code-base, but rather of my time spent over the entire sprint (2 weeks). All this while working on new user stories…

For example, while you originally estimated a user story as only a "2 - mostly easy", it ended up taking you a total of 5 days to complete because you were refactoring and fixing other existing user stories it related to. This can lead to the perception that your original point estimations are inaccurate when, in reality, they are accurate; it's just that no adjustment is made for refactoring. That isn't necessarily taken into consideration when forming a point average for what your team can complete each sprint. Some sprints you hit your "20 average", and another you only hit 15, but you could have possibly refactored 7 points' worth of existing user stories, thus skewing the results.

Refactoring really confirms how much you wish you could predict the future. As I've stated before, sometimes it's easier to just start from scratch on a certain component now that you know better how it's supposed to work. The original piece of code may have been really small and, for the sake of time, not well thought out in the first place. That's totally fine; the mere fact that you're deleting it and starting from scratch attests to it being a good decision at the time. Other times, however, you'll have to make major changes to a bunch of different classes, and because not everything is encapsulated, it may suddenly feel like spaghetti code: changing one thing breaks another, totally unrelated piece.

I will say that with ActionScript 3, strong typing and runtime exceptions have really helped me refactor A LOT faster than in the past. I can "break with confidence", even if I know my code is crap (it isn't; I'm just going for dramatic effect here… *ahem*). This has really helped remove the "fear" factor you can get when touching code. It's one thing to have your code build trust with you: you really thought about its architecture, beat on it some, and it held up. Cool, your code has built some trust with you. When coding quickly in Scrum, however, how much trust do you really have when only parts are uber-solid? Knowing that your code is going into a real-world product people are paying for doesn't lessen the pressure and stress.

Again, AS3 has really helped me here. If there is a problem, I'm more likely to find it now, and find it quickly. Additionally, KNOWING that fact allows me to, again, code with more confidence, try more ideas, and end up with better code. Now, you might think you should start coding for every eventuality, defensively checking for null and isNaN like crazy, but quite the opposite. A lot of the runtime errors point out problems pretty quickly, and the kicker is that they point them out in both quickly written code AND well-architected code. The point is that even well-architected code will have problems you don't foresee. What I end up doing is using my best guess at the time, applying foresight based on our past UATs and other project detail discussions, and moving on with life. Stressing too much about one section is a waste of time; if it works, rad, move forward. You may rewrite it again later anyway…
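
As a rough illustration of "breaking with confidence" (a sketch reusing the hypothetical UserVO from earlier, not project code), a straight cast fails loudly at the exact line that's wrong, instead of a null quietly sliding three Views downstream:

    // Hypothetical handler receiving an untyped service result.
    function onResult(data:Object):void
    {
        var soft:UserVO = data as UserVO; // "as" quietly returns null on a mismatch
        trace(soft);                      // null if data wasn't a UserVO

        // A straight cast throws TypeError #1034 right here, at runtime,
        // the moment the bad data shows up.
        var strict:UserVO = UserVO(data);
        trace(strict.username);
    }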

What Doesn’t Change and What Does

Experience has really taught me what to code quickly, what to architect well, and all the in-betweens. I haven't got it all figured out yet, but I DO know of some sections that usually never change, and ones that change all the time.

The part that never changes is the service layer. These are your Business Delegates in Cairngorm, or your Remote Proxies in PureMVC (or, if you're like me, the Business Delegates that your PureMVC Proxies call). If they DO change, it's because the server-side developer changed the name of the service, or its location. Whoop-pu-dee-doo… 1 line of code in either the class or your ServiceLocator. If your delegates/proxies use Factories to actually parse the server's returned data (XML, JSON, AMF, etc.), then you're even more insulated. Again, middle-tier technology doesn't really change in the middle of a project.
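
For the curious, here's a rough, Cairngorm 2-style sketch of such a Business Delegate (the service name and classes are made up for illustration). If the back-end moves, only the service definition registered with your ServiceLocator changes:

    package com.example.business
    {
        import com.adobe.cairngorm.business.ServiceLocator;

        import mx.rpc.AsyncToken;
        import mx.rpc.IResponder;
        import mx.rpc.http.HTTPService;

        public class LoginDelegate
        {
            private var responder:IResponder;
            private var service:HTTPService;

            public function LoginDelegate(responder:IResponder)
            {
                this.responder = responder;
                // The delegate only knows the service by name...
                this.service = ServiceLocator.getInstance().getHTTPService("loginService");
            }

            public function login(username:String, password:String):void
            {
                // ...so the back-end's location and technology are invisible here.
                var token:AsyncToken = service.send({username: username, password: password});
                token.addResponder(responder);
            }
        }
    }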

A data model change, on the other hand, usually affects your entire application. For example, if you change the data structure of a Person object (PersonVO), suddenly your Factory changes, your VOs change, any Controller classes modifying PersonVOs change (such as Commands in Cairngorm, or Proxies in PureMVC, and potentially Commands as well), and so do any Views that represent or edit them.
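
To sketch that ripple effect (both classes are hypothetical, and would live in separate files), the Factory is the one place raw server data becomes typed VOs, so it has to change in lockstep with the VO:

    package com.example.vo
    {
        public class PersonVO
        {
            public var id:int;
            public var firstName:String;
            public var lastName:String;
            // Add one field here, and the Factory below plus every
            // Command/Proxy and View touching PersonVO changes too.
        }
    }

    package com.example.factories
    {
        import com.example.vo.PersonVO;

        public class PersonFactory
        {
            // Parses one <person/> node of the server's XML into a typed VO.
            public static function fromXML(node:XML):PersonVO
            {
                var vo:PersonVO = new PersonVO();
                vo.id        = int(node.@id);
                vo.firstName = node.firstName.toString();
                vo.lastName  = node.lastName.toString();
                return vo;
            }
        }
    }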

If you're creating complicated Views, whether based on a design comp with little detail or on an unconventional GUI control, they will definitely change over time once someone uses them and gives feedback. Any View based on a list of dynamic data that needs to draw a bunch of children representing ValueObjects, such as a Repeater or a custom Chart, will go through extreme refactoring: both modifications to item renderers and drawing performance improvements, if you don't extend List and do your own drawing routines.
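
Here's a bare-bones, Flex 3-style sketch of the kind of item renderer I mean (names are hypothetical; the real thing goes through many more revisions). The data setter is the hot path, since Flex recycles renderers and sets data for every row while the List scrolls, which is why it's the usual target of those performance refactors:

    package com.example.view
    {
        import mx.containers.Canvas;
        import mx.controls.Label;

        public class PersonRenderer extends Canvas
        {
            private var nameLabel:Label;

            override protected function createChildren():void
            {
                super.createChildren();
                nameLabel = new Label();
                addChild(nameLabel);
            }

            // Called for every row as the List scrolls; keep it cheap.
            override public function set data(value:Object):void
            {
                super.data = value;
                invalidateProperties();
            }

            override protected function commitProperties():void
            {
                super.commitProperties();
                if (data)
                {
                    nameLabel.text = data.firstName + " " + data.lastName;
                }
            }
        }
    }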

Views such as your main Application file, an optional MainView, a Login, and Menus do not change, assuming you use 1 CSS file and straightforward skinning. Most Event and Utility classes just get added to; you don't really change them so much as add or remove class properties and/or methods, while their names and package structure stay the same.
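
What "just gets added to" looks like in practice is something like this hypothetical custom Event: you tack on new type constants and payload properties sprint after sprint, but the class name and package never move, so nothing listening for it breaks:

    package com.example.events
    {
        import flash.events.Event;

        public class PersonEvent extends Event
        {
            public static const SELECTED:String = "personSelected";
            public static const SAVED:String    = "personSaved"; // added a sprint later

            public var person:Object; // would be your PersonVO

            public function PersonEvent(type:String, person:Object = null, bubbles:Boolean = false)
            {
                super(type, bubbles);
                this.person = person;
            }

            // Required so re-dispatched events keep their payload.
            override public function clone():Event
            {
                return new PersonEvent(type, person, bubbles);
            }
        }
    }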

Cairngorm Commands just grow in scope as the development age of your application increases. Since PureMVC Commands delegate a lot of that Model modification off to Proxies, those Proxies tend to grow in scope as the complexity of your data interactions increases. They only get waxed or massively changed if your data model does, and that doesn't really happen late in the project.

The above is totally case-by-case, but it has been consistent on a lot of my projects. Your mileage may, and most likely will, vary.

The Cons of Refactoring

There are a few cons to so much refactoring. The first is that some clients don't understand why you're coding the same thing twice… or more, especially when Scrum is supposed to be about getting it done quickly vs. over-thinking it. In my experience, if you can speak intelligently at a high level, you can explain each refactoring away. I can't, so I usually explain it to a project manager who's capable of translating it into layman's terms for the client.

The second is that it makes merging on Merge Day a TON harder. You may have refactored twice the week before and totally forgotten all the details of why you did. Suddenly, 4 days later (every other Wednesday in our case), you're having an insanely hard time merging code from your branch(es) into trunk. This may require a long conversation with your team members while you struggle to remember why you made such massive code changes.

Even if you do remember, the other developer may feel a little frustrated if you didn't invite them into the refactoring discussion for something you felt, at the time, was trivial. It probably was trivial; it's just blown out of proportion now because merging is always stressful. Either that, or you just spend a few hours getting trunk working again. If I totally wax something, I'll usually put in a large, drawn-out code comment to explain why. Additionally, I'll do the same thing in my SVN check-in comments.
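
Something like this, for example (the class name and details are invented; the point is the paper trail):

    /*
     * REFACTOR NOTE (Sprint 3): Waxed the old redraw-everything routine.
     * It redrew every child on each data update; the new version only
     * invalidates the item renderers whose VOs actually changed.
     * If scrolling breaks, look here first.
     */

…with a matching check-in comment:

    svn commit -m "Refactored ChartView drawing: invalidate changed renderers only; see REFACTOR NOTE in ChartView.as"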

The third is that it's a project manager's nightmare. If she doesn't have enough forewarning of these refactors and their possible effect on getting a single user story, or a set of them, done by the end of the sprint, it can be a bad surprise. Communicating them during the daily standup meeting, along with their potential ramifications, is best. It can also make planning future sprints challenging. If your team has been chugging along at an average of 12 points per sprint for 3 consecutive sprints, and suddenly in sprint #4 you spend 60% of your time refactoring, you're clearly going to finish with a lot fewer points in user stories completed.

This sets the project manager up for failure. They cannot effectively communicate projected progress to the client, nor provide visibility into the current progress of the app, since something that worked for a while may suddenly break in the next UAT. You're supposed to be completing user stories, not creating new ones that break old ones. Again, forewarning is the only remedy I know of so far. I'm not sure yet what doing too much refactoring is a symptom of. Most instances so far, on my current project and past ones, have been for random reasons.

Conclusions

I really like how fast I can code some things in Agile. Other things have stayed the same, but the overriding goal of "get it working, but don't write crap code" is such a high bar… and I love it. It's the same speed as agency coding, only you know you'll have to live with the code (aka potentially eating your own mess), so you end up producing better code than you would in an agency setting.

I also like either drawing on experience, or just making challenging inferences, about what to architect well and what to just get working without too much thought. It's nice to have the variety.

Finally, I'm not sure what to think of the refactoring. I like that it's "ok" and an expected part of the process, but I feel that my project is unique in the amount I'm personally doing. My coworker, for example, isn't doing nearly as much as I am; he's chugging along on other user stories and is set to beat me, again, in point values for user stories completed at the end of this sprint. We're really pushing the limits of Flash Player here, and only one section in this large app is really that challenging; the rest are your run-of-the-mill Flex screens. So it sounds to me like the "on average, 30% of your time is spent refactoring per sprint" figure still applies. There is no way I'll be refactoring this much on some of the easier sections in future sprints.

Stay tuned for #3 in the Agile Chronicles series where I talk about every developer using their own Branch in Subversion.

More Stories By Jesse Randall Warden

Jesse R. Warden, a member of the Editorial Board of Web Developer's & Designer's Journal, is a Flex, Flash and Flash Lite consultant for Universal Mind. A professional multimedia developer, he maintains a Website at jessewarden.com where he writes about technical topics that relate to Flash and Flex.
