
Are you responsible for managing a large collection of resources in AWS? Resources that other people often change without you even knowing about it? You have a list of rules that your infrastructure has to comply with; perhaps HIPAA. With all those changes going on, it is almost impossible for your team to keep track of its compliance state on their own. Your brain feels like it is about to explode.

Generally, there are three parts of the problem we hope to solve with whatever tool we pick.

  • First, we need to know at any given time whether our infrastructure is compliant with whatever rules are relevant for our organization.
  • The next part of the problem is resolution: bringing your overall system back to a compliant state as soon as possible. Especially for compliance rules that deal with security, the timeliness of this action is essential.
  • Finally, after we fix the issue, comes the last part of the problem: determining at what point, and through what actions, the system became non-compliant.

So what tools are there for us to solve these problems? The inventory and compliance management space is pretty mature at this point; it is a problem that has been around for a very long time, so there is quite a variety of tools you can use to help solve it. I would like to break them down into three categories:

  • Enterprise-focused tools.
    • Examples: SolarWinds, Spiceworks, Microsoft SCCM.
    • All-in-one solution.
    • Expensive.
    • OS-specific.
    • Good for large organizations.
    • Not so good for small organizations.
  • DevOps-focused tools.
    • Examples: Puppet, Chef, Ansible.
    • OS-agnostic.
    • Not all-in-one.
    • Integrating can be difficult.
    • Often open-source.
    • Highly automatable.
    • Benefits beyond inventory/compliance.
  • Fully-integrated tools.
    • All-in-one solution.
    • Small or large organization.
    • Highly automatable.
    • Trusted vendor.
    • Pay for what you use.
    • Made for the cloud.
    • Scales near-infinitely and handles your familiar resources with no changes required on your part.

Any of the three categories is better than no inventory tool at all. But in my opinion, the third category is the best bang for your buck. And one of the best fully-integrated tools is AWS Config.

I will elaborate on AWS Config in our next post, so stay tuned and watch out for it.

To use Sonar for code analysis you need to have some prerequisites installed. We will go through the step-by-step installation of all of the required software.

Note that the software versions we have used are not mandatory; you may use the latest versions, but they need to be compatible with each other.

Java Development Kit (JDK) 1.8.0_91 (32/64-bit)

Download and install the JDK 1.8.0_91, if you don’t have it installed.

Install the JDK. By default it will be installed in the Program Files folder of your OS drive, which you can change.

After installing the JDK you will find a folder named Java in your Program Files folder. Note down this path, as you will need it later: C:\Program Files\Java\jdk1.8.0_91\

Installing Sonar

Download Sonar 6.4 (the version used throughout this walkthrough). You can use this link to download it:

http://www.sonarqube.org/downloads/

It is downloaded as a zip file. Unzip the file and place the folder in any drive of your choice.

We have placed it in: D:\Projects\TestProject\Sonar\

Download Sonar Runner

You also need to download Sonar Runner 2.4. It can be downloaded from the same link. Unzip the downloaded file and place it in any directory of your choice. We have placed it in the same directory as Sonar, i.e. D:\Projects\TestProject\Sonar\, as shown above.

Installing C# Plugins

You need the CSharp Plugins Ecosystem 5.10.1.1411 plugin which you may download from the link below:

http://docs.sonarqube.org/display/PLUG/C%23+Plugin

Unzip the file (if it is zipped) and copy the file sonar-csharp-plugin-5.10.1.1411.jar to the plugins directory of Sonar.

In our case it is to be copied here: D:\Projects\TestProject\Sonar\sonarqube-6.4\extensions\plugins\sonar-csharp-plugin-5.10.1.1411.jar

Configuring Sonar

Before starting the sonar server we will need to make certain configuration changes:

  • Go to the conf folder of Sonar. In our case it is found at: D:\Projects\TestProject\Sonar\sonarqube-6.4\conf\. Here you will find the sonar.properties file.
  • Open the file in Notepad or some other editor.
  • Try to find “sonar.jdbc.username” within the file (without the quotes).

Uncomment these lines (by deleting the # at the beginning of the line) if they are commented, so that the file contains contents like the below.
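Once uncommented, the lines will look something like the following (a sketch only; the exact values depend on your SonarQube version and database setup, and the sonar/sonar pair below is simply the credential pair commonly used in tutorials):

sonar.jdbc.username=sonar
sonar.jdbc.password=sonar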

Configuring Sonar-Runner

Now it is time to configure Sonar-Runner:

  • Go to the conf folder of Sonar Runner. In our case it is found at: D:\Projects\TestProject\Sonar\sonar-runner-2.4\conf\.
  • Open sonar-runner.properties file in Notepad or in some other editor.
  • Try to find the text “sonar.sourceEncoding=UTF-8”. If it is commented, un-comment it (by deleting the # prefix) so that the file contains contents like the below.
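After this change, the relevant part of sonar-runner.properties would look something like the following (the host URL assumes the default local server used in this walkthrough; adjust it if your server runs elsewhere):

sonar.host.url=http://localhost:9000
sonar.sourceEncoding=UTF-8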

Setting the Environment Variables

To set up the environment variables, you need to go to System Settings:

  • Press Windows key + Pause/Break to open System settings, or go to Start->Control Panel->System.
  • Click on the Advanced system settings link. It will open a dialog.
  • Click on the Environment Variables button. It will open another dialog box.
  • If you find a variable named JAVA_HOME in the System variables, select it and click the Edit button. Otherwise click the New button.
  • Fill the following in the dialog box:
    • Variable name: JAVA_HOME
    • Variable value: C:\Program Files\Java\jdk1.8.0_91
  • Note: Remember that the variable value is the same path where the JDK was installed, as mentioned earlier.
  • Click Ok.
  • Use the same steps to create another System variable named SONAR_RUNNER_HOME.
  • Fill the following for the variable SONAR_RUNNER_HOME.
    • Variable name: SONAR_RUNNER_HOME
    • Variable value: D:\Projects\TestProject\Sonar\sonar-runner-2.4
  • Click OK.
  • Finally, try to find the Path system variable in the list. If it is not there, create it.
  • However, if it already exists, select it and click the Edit button.
  • Do not change the existing value of the variable; just add the following at the end of the value: ;%JAVA_HOME%\bin;%SONAR_RUNNER_HOME%\bin
  • If you have placed the JDK or Sonar Runner at a different path, you will have to change the corresponding values accordingly.
  • Click OK and we are done with defining the Environment Variables.
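If you prefer the command line, roughly the same variables can be defined from an elevated command prompt with setx (a sketch using the paths from this walkthrough; adjust them to your machine, and note that setx only affects newly opened command prompts):

setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_91"
setx SONAR_RUNNER_HOME "D:\Projects\TestProject\Sonar\sonar-runner-2.4"

The Path additions still need to be appended as described above.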

Running Sonar Server

You have everything set to get the sonar server running. To run it, perform the following steps:

  • Go to the bin directory of your sonar server. In our case it is D:\Projects\TestProject\Sonar\sonarqube-6.4\bin\
  • There you will see different directories for different operating systems. Since we have installed the 64-bit JDK, we will go to the windows-x86-64 directory.
  • Right-click on the StartSonar.bat file and choose Run as administrator. You may get a popup; click on the Run button.
  • It will open a command prompt. Wait for some time until the sonar server has started, as shown below: ConfigureSonarQubeWithDotNetProject.PNG
  • Now you have successfully started the sonar server.
  • Note: Depending on the sonar version, you may get a different message at the end in the command prompt.
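Equivalently, you can start the server from an elevated command prompt (using the install location from this walkthrough):

cd /d D:\Projects\TestProject\Sonar\sonarqube-6.4\bin\windows-x86-64
StartSonar.bat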

Log on to sonar server

After the sonar server has started you can log on to it by visiting the URL http://localhost:9000/, where you will see the below page:

ConfigureSonarQubeWithDotNetProject1.PNG

Click on the log in link and enter the following credentials:

Login: admin

Password: admin

Note: Both username and password are case sensitive.

After you are logged in as an administrator you will see the dashboard. This is the sonar dashboard, where you can see the list of projects, menus, etc.

C# Plugins

Click on the Settings link in the top right corner. From the menu on the left, click on Update Center (in newer versions you may find it under the System tab), then click on the Installed Plugins tab. You should see the C# [csharp] plugin, version 5.10.1.1411 (matching the jar installed earlier), under Installed plugins. You can also install other C#-related plugins from the Available Plugins tab.

Creating Project Properties file

Open Notepad or some other editor and create a new file, where you will write the following project properties:

  • Project key, which must be unique across all projects. It is defined by sonar.projectKey, as shown: sonar.projectKey=Tutorial:SonarCsharp
  • Project version. It is defined by sonar.projectVersion, as shown: sonar.projectVersion=1
  • Project name. This is the name of the project that will appear in the sonar project list. It can be different from your solution name: sonar.projectName=My C# Project for Sonar Analysis

Sonar Related Settings

Here you are going to specify the properties required by sonar to work with .NET projects, as shown below:

sonar.sources=.

sonar.language=cs

C# Settings

Here you will define the properties for C#, specifying where the C# plugins are installed and the location of the project. A complete sample properties file is shown after the list below.

  • Project location: the project location is specified by the sonar.dotnet.visualstudio.solution.file property. Its value is the actual name of the solution file along with its extension, as shown below: sonar.dotnet.visualstudio.solution.file=MySonarProject.sln
  • .NET installation directory: here you have to specify the installation directory of .NET, as shown below: sonar.dotnet.4.0.sdk.directory=C:/Windows/Microsoft.NET/Framework/v4.0.30319
  • .NET version: specify the .NET version as below: sonar.dotnet.version=4.0
  • Test projects: if you have created any unit test projects, you can specify them with the following property: sonar.dotnet.visualstudio.testProjectPattern=*UnitTests*;*test*;*Test*
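Putting all of the above together, the complete sonar-project.properties file for this walkthrough would look like the following (adjust the key, name, solution file and paths for your own project):

# Project identity
sonar.projectKey=Tutorial:SonarCsharp
sonar.projectVersion=1
sonar.projectName=My C# Project for Sonar Analysis

# General Sonar settings
sonar.sources=.
sonar.language=cs

# C# / .NET settings
sonar.dotnet.visualstudio.solution.file=MySonarProject.sln
sonar.dotnet.4.0.sdk.directory=C:/Windows/Microsoft.NET/Framework/v4.0.30319
sonar.dotnet.version=4.0
sonar.dotnet.visualstudio.testProjectPattern=*UnitTests*;*test*;*Test*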

Saving Project Properties File

Save the above file, with the name sonar-project.properties, in the same directory where your project solution file (with extension .sln) exists.

Note: the file must have the .properties extension only; it must not have any other extension, such as .txt.

Analyzing .Net project with Sonar

Now finally we will analyze our .NET project with SonarQube.

To run Sonar analysis on a .NET project, perform the following steps:

  • Start the sonar server as explained earlier.
  • Run a command prompt as Administrator.
  • Navigate to the directory where you have kept the sonar-project.properties file (where the .sln file exists) with the following command: cd [path of properties file]
  • When you reach your solution directory, enter the following command: sonar-runner (see the example after this list).
  • Note: The sonar server must be running when you enter this command.
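For example, if the solution and its sonar-project.properties file live in a hypothetical D:\Projects\MySonarProject directory, the two commands would be:

cd /d D:\Projects\MySonarProject
sonar-runner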

Wait for some time until the analysis is run on the project and you get the success message as shown below:

ConfigureSonarQubeWithDotNetProject2.PNG

Getting Sonar Analysis Reports

The sonar analysis has been done and we can see the analysis reports on the sonar server.

To see the analysis results, just refresh the Dashboard page at http://localhost:9000/ and log in if you are not logged in already.

You will see your project in the Projects section. To see the detailed analysis report of your .NET project, just click on the name of your project, displayed under the Projects section, and you will be able to see the reports.

Software Estimates

“We will ask for estimates and then treat them as deadlines.”

Estimates are typically a necessary evil in software development. Unfortunately, people tend to assume that writing new software is like building a house or fixing a car and that as such the contractor or mechanic involved should be perfectly capable of providing a reliable estimate for the work to be done in advance of the customer approving the work. With custom software, however, a great deal of the system is being built from scratch, and usually how it’s put together, how it ultimately works, and what exactly it’s supposed to do when it’s done are all moving targets. It’s hard to know when you’ll finish when usually the path you’ll take and the destination are both unknown at the start of the journey.

I realize that estimates are a hard problem in custom software development, and I am certainly not claiming to be the best at producing accurate estimates. However, I have identified certain aspects of estimates that I believe to be universally (or nearly) true.

Estimates are Waste

Time spent on estimates is time that isn’t spent delivering value. It’s a zero-sum game when it comes to how much time developers have to get work done – worse if estimates are being requested urgently and interrupting developers who would otherwise be “in the zone” getting things done. If your average developer is spending 2-4 hours per 40-hour week on estimates, that’s a 5-10% loss in productivity, assuming they were otherwise able to be productive the entire time.

A few years ago, a Microsoft department was able to increase team productivity by over 150% without any new resources or changes to how the team performed software engineering tasks (design, coding, testing, etc.). The primary change was in when and how tasks were estimated. Ironically, much of this estimating was at the request of management, who, seeking greater transparency and hoping for insight into how the team’s productivity could be improved, put in place policies that required frequent and timely estimates (new requests needed to be estimated within 48 hours). Even though these estimates were only ROMs (Rough Orders of Magnitude), the effort they required and the interruptions they created destroyed the team’s overall productivity.

Estimates are Non-Transferable

Software estimates are not fungible, mainly as a corollary to the fact that team members are not fungible. This means one individual’s estimate can’t be used to predict how long it might take another individual to complete a task.

The transferability of estimates is obviously improved when the estimator and the implementer have similar experience levels, and even more so when they work together on the same team. Some techniques, like planning poker, will try to bring in the entire team’s experience when estimating tasks, ensuring estimates don’t miss key considerations known to only some team members or that they’re written as if the fastest coder would be assigned every task. This can help produce estimates, or estimate ranges, that are more likely to be accurate, but it does so by multiplying the time spent on estimating by the entire team’s size.

Estimates are Wrong

Estimates aren’t promises. They’re guesses, and generally the larger the scope and the further in the future the activity being estimated is, the greater the potential error. This is known as the Cone of Uncertainty.

Nobody should be surprised when estimates are wrong; they should be surprised when they are right. If estimates were always accurate, they’d be called exactimates.

Since smaller and more immediate tasks can be estimated more accurately than larger or more future tasks, it makes sense to break tasks down into small pieces. Ideally, individual sub-features that a user can interact with and test should be the unit of measuring progress, and when these are built as vertical slices, it is possible to get rapid feedback on newly developed functionality from the client or product owner. Queuing theory also suggests that throughput increases when the work in the system is small and uniform in size, which further argues in favor of breaking things down into reasonably small and consistent work items.

Estimates of individual work items and projects tend to get more accurate the closer the work is to being completed. The most accurate estimate, like the most accurate weather prediction, tells you about what happened yesterday, not what will happen in the future.

Estimates are Temporary

Estimates are perishable. They have a relatively short shelf-life. A developer might initially estimate that a certain feature will take a week to develop, before the project has started. Three months into the project, a lot has been learned and decided, and that same feature might now take a few hours, or a month, or it might have been dropped from the project altogether due to changes in priorities or direction. In any case, the estimate is of little or perhaps even negative value since so much has potentially changed since it was created.

To address this issue, some teams and development methodologies recommend re-estimating all of the items in the product backlog on a regular basis. However, while this does address the perishable nature of estimates, it tends to exacerbate the waste. Would you rather have your team estimate the same backlog item half a dozen times while never actually starting work on it, or would you rather they deliver another feature every week?

We know that estimates tend to grow more accurate the later they’re made (and the closer they are to the work actually being done). Thus, the longer an estimate can be reasonably delayed, the more accurate it is likely to be when it is made. This ties in closely with Lean Software Development’s principle of delaying decisions until the last responsible moment. Estimates, too, should be performed at the last responsible moment, to ensure the highest accuracy and the least need to repeat them.

Estimates are Necessary

Despite all the drawbacks, estimates are often necessary. Businesses cannot make decisions about whether or not to build software without having some idea of the cost and time involved. Service companies frequently must provide an estimate as part of any proposal they make to build an application or win a project. Just because the above words are true doesn’t magically mean estimates can go away. However, one can better manage expectations and time spent on estimating if everybody involved, from the customer to the project manager to the sales team to the developer, understands these truths when it comes to custom software estimates.

Conclusion

If you’re in a position where you want a reliable estimate for a software project, and you’re having a hard time getting one from your developer/team, remember this quote: “You can’t find someone who knows how long this will take, but you can probably find someone who will lie to you.”

Essentially: The more difficult it is for you to get an estimate, the more likely it is that when you finally do, it’s not terribly accurate.

Recently, organizations have been embracing DevOps, which is a great thing. However, the adoption is causing a lot of confusion as well. Some of you might have heard the term “Agile and DevOps”. Phrased that way, it sounds as if Agile and DevOps are different things. To over-simplify further, people assume Agile is all about processes (like Scrum and Kanban) and DevOps is all about technical practices like CI, CD, test automation and infrastructure automation.

This is causing a lot of harm, as some organizations now have Agile and DevOps as two separate streams as part of their enterprise Agile transformation. Agile by definition disrupts silos, and yet in this case people are creating new silos in the name of Agile and DevOps.

With that background in mind, let’s try to understand what exactly DevOps is all about:

  • DevOps is mainly the widening of Agile’s principles to include systems and operations instead of stopping its concerns at code check-in. Beyond working together as a cross-functional team of designers, testers and developers, DevOps suggests adding operations to the definition of the cross-functional Agile team.
  • DevOps strives to focus on the overall service or software fully delivered to the customer instead of simply “working software”.
  • It emphasizes breaking down barriers between developers and operations teams, and getting them to collaborate in a way where they benefit from combined skills.
  • Agile teams have long used automated builds, test automation, Continuous Integration and Continuous Delivery.
  • With DevOps, that extends further to “Infrastructure as Code”, configuration management, metrics and monitoring schemes, a toolchain approach to tooling, and virtualization and cloud computing to accelerate change in the modern infrastructure world. DevOps brings some tools to the table as well, such as configuration management (Puppet, Chef, Ansible), orchestration (ZooKeeper, Noah, Mesos), monitoring, virtualization and containerization (AWS, OpenStack, Vagrant, Docker) and many more.
  • So you see, DevOps is not a separate concept but an extension of Agile to include operations in the definition of the cross-functional Agile team, which collaborates and works as one team with the objective of delivering software fully to the customer.

Creating separate Agile and DevOps horizontals in any organization just defeats the whole purpose (removing silos) of DevOps.

It depends on what you are passionate about. I would choose passion over all other considerations or else you will burn out and the money won’t help. Any position you pursue in tech is going to require your complete dedication to achieve success.

However, I think the DevOps area has a lot of growth potential in the future. While I cannot speak to your interests, I can tell you why I am drawn to the field after being a web-based software developer.

I have learned that if you are operations staff you had better be automating your job, and if you are a developer you have to face the inevitability of getting down and dirty with operations if you are to stay relevant. Developers who won’t administer/monitor and admins who won’t develop will become less and less valuable to organizations needing to stay competitive.

DevOps is exciting because you are always working with and integrating new technologies and solving new challenges. Essentially your job is to find a happy balance between operations and developers. This relationship is delicate and can blow up if not regulated. As a DevOps specialist, your job is to integrate these two different mindsets. This requires that aspects of IT be securely shared so that you don’t end up with the blame game. Developers need to continually push code and operations wants to keep everything running smoothly. The more integrated the systems and processes in use, the easier it is for each to do their job.

I personally like to think of IT as three separate phases that all contribute to the ultimate success of the enterprise tech ecosystem: packaging, automation, and scaling.

Packaging:

DevOps is great if you like to explore and work with a variety of technologies and processes. I think one of the first things to consider is the packaging of the IT that the tech teams use to provide the organization’s products and services. The better packaged and more malleable the systems, the easier it is to keep everything standardized and reusable.

If you like playing with configuration management systems (Puppet, Chef, Ansible, etc…) and digging into imaging systems such as Docker you will like DevOps. I would caution that it is very important to create highly configurable packaging of the IT systems in use so that they can evolve as the organization’s needs change. This also makes it easier to modify for production, QA, staging, and development environments.

If you think about it, the number of new technologies and services being released into the market is growing exponentially (especially with the add-on potential of all the open-source frameworks in existence). In DevOps no technology is off limits, and you find yourself continuously working with, integrating, and automating different technologies. As the amount of tech and services grows, so does the demand for people who can put it all together into golden images (configuration-managed images on different environments).

Automation:

Your automation potential is only as good as your ability to package the infrastructure in a form machines can work with effectively. If you come from a development background you most likely have had to deal with brittle environments (at least in testing new technologies, which you should be continuously doing).

The DevOps specialist makes it easy for programmers and operations to automate their jobs so that we don’t have to reinvent the wheel over and over again. Ultimately, if the automation is good enough we can realize a scalable architecture (which is the end goal).

You should like scripting a lot. You don’t need to be the best programmer to accomplish this but the more integrated your approach the easier it will be to build on your previous work (which I like). Automation brings the machines to life and if you like seeing a bunch of moving pieces come together to achieve some measurable outcome you will like this part of the job.

I would recommend that you know at least one glue language: Python, Ruby, Go, etc. The more flexible the language, the better. Although, the beauty of automation is that many different languages can be brought together to create a unified system. If something needs to be built for speed, it’s easy enough to design that part in a language like C or Go while allowing other tasks that need more flexibility to be written in a higher level language. You definitely want to become very good at shell scripting which many times ties everything together.

I personally cannot see automation becoming less in demand in the future. The promise of the cloud is built on automation, and enterprise usage of the cloud is growing rapidly throughout organizations of all sizes and types.

Scaling:

If reusability is a passion of yours, I think you would definitely like DevOps. I believe the biggest factor in the success of the tech organizations of the future will be their ability to scale rapidly while being able to deflate when not needed, to minimize costs during periods of low demand. Customers want speed. They don’t care about the tech behind the application as long as the application is reliable, zippy, and meets their needs.

If you can create packages of IT that can be easily automated in a portable fashion then I think you will have great prospects in the tech world in the future. Companies like Google and Facebook would never have gotten as popular as they are if they had not learned to scale their IT effectively.

Scalability is not easy to achieve and many would rather not have to worry about it, which explains the growth of scalability-as-a-service offerings. But somebody has to know how. Think about the problems of the future: data analysis, AI, the Internet of Things, mobile consumption, scalable web-driven apps, etc. While all of these tech areas require different skills to develop on their own, each is absolute garbage without the same fundamental building blocks. Want to jump from mobile to AI? DevOps could allow that. Want to play with that new SaaS service that is all the rage these days? DevOps can allow that.

DevOps is about being the glue that holds everything and everyone together, and to me that is what makes it so exciting. The possibilities are limitless and the technologies are always growing and evolving. And if you don’t focus on DevOps, you will still have to manage infrastructure as a developer anyway.

When I first started programming I had a passion for machine learning in C; then over time I started creating websites using ASP.NET for my organization. Over time I felt suffocated by the limited nature of the technologies people expected me to work with day in and day out. It wasn’t that I did not like the technologies, but I felt like I was in single-technology hell. And once you gain a lot of experience in a specific technology or system, people just expect you to focus on that area: recruiters, managers, developers, everyone. With DevOps, variety is part of the job description, so if you ever feel trapped by technology and find yourself looking to the stars wondering what the hell you got yourself into, DevOps can free you from that limited mindset.

That is why I got into DevOps. I certainly don’t claim to be the best or the most experienced out there, but I am a heck of a lot happier these days. And I am constantly learning new things that I can apply to any new project, whether it is a new AI platform or a mobile application.

At the end of the day your happiness and the passion you feel for what you do is all that really matters.

There are many efficient ways to measure the maturity and performance of an agile team, and one of them is measuring cycle time. In a traditional project, we have the project cycle time, which is the time between the start of the project and the final delivery or release. In an agile project, cycle time can be measured at the level of user stories.

Lean: The above reasoning is supported by two main principles of a lean mindset:

Reducing cycle time is one very important principle of a lean mindset: we get faster feedback and test results, we prevent work from clogging our development queues, and we improve morale because the team sees that work ‘disappears’ faster from the backlog.

‘Avoiding waste’ is the primary objective of a lean approach. Any wait time between the different process steps can be considered waste. Reducing wait time is therefore a good way to get rid of ‘waste’, and it also reduces the overall cycle time.

How to measure: The following image shows a simplified view of the life cycle of a user story in an agile project, including the potential wait times:

Measuring Agile Maturity 1.gif

The cycle time is the time between ‘Ready to Start’ and ‘Release’. This time can be significantly reduced by reducing the wait times (WT1, WT2 and WT3).

Depending on how we organize our sprint backlog, we can start by noting on each user story card the date on which the story became ready for development by our agile team. This might be the start date of our sprint.

As soon as we meet the ‘definition of done’ for that particular user story, note the actual date again on the story card. The difference between the two dates is the user story cycle time.

How to calculate the average cycle time: At the end of each iteration, calculate the average of the cycle times and divide the result by the average complexity of the user stories in the sprint.

The result of this division gives us the average cycle time of that particular sprint. The division by the average complexity is necessary to normalize for complexity, so that sprints containing stories of different sizes can be compared.
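As a minimal worked example (with made-up numbers), here is the calculation in Python:

# Cycle times (in days) and complexities (in story points) for the
# user stories completed in one sprint -- example data only.
cycle_times_days = [4, 6, 3, 8, 5]
complexities = [2, 3, 1, 5, 3]

avg_cycle_time = sum(cycle_times_days) / len(cycle_times_days)  # 5.2 days
avg_complexity = sum(complexities) / len(complexities)          # 2.8 points

# Normalized average cycle time: about 1.86 days per story point.
print(avg_cycle_time / avg_complexity)

Tracked sprint over sprint, this normalized number should decrease as the team matures.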

Over time, when our agile team matures and becomes more efficient, we should see a decrease of the average cycle time. If this is not the case, then we are doing something wrong, and there are probably some bottlenecks in our process that we want to resolve.

Advanced mode: Cycle time variance: Reducing cycle time is one thing, but what about cycle time variance?

Let’s see these two different cycle time-related metrics on a graph:

Measuring Agile Maturity 2.png

The red curve shows the distribution of cycle times of team 1, whilst the green curve shows the distribution of cycle times of team 2. The green team has the higher maturity level, for two reasons:

  • The mean/average cycle time CT2 is smaller than the mean/average cycle time of team 1, CT1.
  • The standard deviation of the measured cycle times of team 2 (B) is smaller than the standard deviation of the cycle times of team 1 (A).

This means that over multiple sprints or projects, team 2 is able to produce more predictable output.

Why cycle time is the right indicator of maturity:

  • It focuses on the final goal (customer value) and not on the process (the customer does not care).
  • It is very easy to collect the relevant data.
  • It is a very simple and understandable metric.
  • We do not need additional tools, only a pencil and a calendar.

Every project has a certain level of complexity in it. When we say a project is simple, we actually mean that its degree of complexity is very low, or can be considered negligible; nevertheless, it exists, even if in minute magnitude. Project management methods deal with project complexities depending upon how flexible they are and what kind of provisions they offer to deal with them. Traditional management methods such as Waterfall are often rigid owing to their staged working processes, which are also often irreversible. Using pre-1990-era methods, one can try to address the complexity in a project with a lesser or greater degree of success, but typically such methods do not make it possible to reduce the actual level of difficulty in executing the project, nor its complexity. This is not necessarily true in the case of Agile. With Agile we can actually try to control the level of complexity in a project, provided we have the right level of experience in implementing Agile principles and techniques.

What is “Complexity” in a project?

Broadly speaking, the term ‘complexity’ is best understood as difficult or complicated conditions arising from the availability of multiple options, or options which make us focus simultaneously in different directions at a given time, resulting in a multi-dimensional scenario that is hard to understand and resolve. Complexities in a project can be of different types. Business-level complexities arise due to uncertain market conditions, technological advancements, and other such factors which affect the business logic contained in the project. Project-level complexities can be of two types: project complexity and requirements complexity.

Project complexity

Project-related complexity can be different for different types of organizations. Several factors contribute to it; however, the most important ones are uniqueness, limiting factors, and uncertainty.

Uniqueness: Every project is unique and has its own attributes and requirements. As the project commences, it gains maturity over time and benefits from the learning process. Uniqueness matters most when the organization is running a project that is the first of its kind, or when it has no prior experience dealing with such projects.

Limiting factors: Projects are subject to certain factors which can affect their execution or commencement, such as budget constraints, the technical know-how of the team, the working schedule, and at times even cultural differences.

Uncertainty: Uncertainty in a project can be due to external or internal factors. External factors may include government-imposed rules and regulations, uncertain market changes, and a fluctuating economic climate. Internal factors may comprise the level of management’s participation in the project, constantly changing company policies, stakeholders’ involvement in the project, etc. All these factors affect the scope of the project.

It is a known fact that a project’s complexity affects its success. The manner in which a business anticipates, fully understands, and addresses the complexity determines whether a project is going to be successful or not.

Requirements complexity

Requirement analysis is the journey to discover the ‘unknowns’. It is an understanding of the business problem and needs, and of what it takes to address them. Requirements complexity is defined by two key factors: the level of ‘unknowns’, and volatility.

The level of ‘unknowns’: At the start of the project, how much is ‘known’ about the problem statement? How much is known about the business processes? The level of ‘unknown’ must be assessed at a very granular level, particularly pertaining to business rules, systems, functions etc.

Volatility: What is the expected level of requirements volatility once the project is launched? ‘Volatility’ in requirements emerges due to frequent changes, starting from the design phase all the way through implementation. Project management methodologies often assume that, when requirements move onto the design phase, they stand ‘complete’ and are not subject to change. However, that is not always the case as there is always some level of uncertainty and unpredictability. Requirements volatility leads to significant risk and its consequent uncertainty.

Can Agile reduce complexity?

In traditional project management methods, the complexity in a project is often managed by investing a certain amount of time in the analysis phase with the sole objective of analyzing the levels of complexity and making plans to deal with it. It is based upon the assumption that the time invested in the analysis activity will help to reduce complexity and increase the chances of developing a successful project by using various methods and processes. The investment in time should be considered worthwhile since the analysis can help managements to make informed decisions.

Unlike traditional project management methods, in Agile there are no special stages to deal with project complexity. The product owner, who is responsible for and oversees the entire project, tries to address the complexity levels based upon his or her experience in the subject, in addition to what the team can contribute in terms of effort and suggestions. However, there is a big plus point in how the Agile process works: the inherent incremental product model makes it possible to reduce project complexity to a great extent.

In Agile we don’t work with the entire project at any given time; rather, we select a few important features and develop them in short bursts of activity known as sprints. The time spent developing the features varies from team to team, depending on the team’s level of maturity and its hold over the technology used for developing the project. A project may appear complex when its overall complexity is considered, but since in Agile we don’t have to deal with the entire project during the sprints, the complexity can be addressed by estimating the levels of difficulty in the individual features and developing them one by one. That way we only encounter a fraction of the actual complexity at a time, which can be easily tackled by the team.

Project complexity is inevitable and should be acknowledged to enhance the team’s ability to respond and adapt to change while staying focused on the end objective. Agile practices and methodology promote the capability to drive and manage change through an understanding of the inherent complexity in projects.

It is pretty incredible how often we complain about our best employees leaving, and we really do have something to complain about — few things are as costly and disruptive as good people walking out the door.

We tend to blame our turnover problems on everything under the sun while ignoring the crux of the matter: People do not leave jobs; they leave their managers.

Here “manager” does not mean only the boss or the owner. For a junior developer it can be a senior developer; for a senior developer, the lead developer; for a lead developer, the development manager; and for a development manager, the scrum master or product owner.

The sad thing is that this can easily be avoided. All that is required is a new perspective and some extra effort on both ends.

Let us discuss a few things that send good people packing:

Overwork: Nothing burns good employees out quite like overworking them. It is so tempting to work our best people hard that we frequently fall into this trap. Overworking good employees is perplexing; it makes them feel as if they are being punished for great performance. It is also counterproductive.

New research from Stanford shows that productivity per hour declines sharply when the workweek exceeds 50 hours, and productivity drops off so much after 55 hours that you do not get anything out of working more.

If we must increase how much work our talented employees are doing, we had better increase their status as well. Talented employees will take on a bigger workload, but they will not stay if their job suffocates them in the process. Raises, promotions, and title changes are all acceptable ways to raise status along with workload.

If we simply increase workload because people are talented, without changing a thing, they will seek another job that gives them what they deserve.

Not recognizing contributions and not rewarding good work: It is easy to underestimate the power of a pat on the back, especially with top performers who are intrinsically motivated. Everyone likes kudos, none more so than those who work hard and give their all.

We need to communicate with our people to find out what makes them feel good (for some, it is a raise; for others, it is public recognition) and then to reward them for a job well done.

Not caring about employees: Studies show that more than half of people who leave their jobs do so because of their relationship with their boss. We need to balance being professional with being human.

We should celebrate an employee’s successes, empathize with those going through hard times, and challenge people, even when it hurts. If we fail to really care, we will always have high turnover rates. It is impossible to work for someone nine-plus hours a day, five days a week, fifty-two weeks a year when we are not personally involved with each other and do not care about anything other than production yield.

Not letting people pursue their passions: Talented employees are passionate. Providing opportunities for them to pursue their passions improves their productivity and job satisfaction. But sometimes we want people to work within a little box. Sometimes we think that productivity will decline if we let people expand their focus and pursue their passions. This fear is unfounded. Studies show that people who are able to pursue their passions at work experience flow, a euphoric state of mind that is five times more productive than the norm.

Failing to develop people’s skills: Management may have a beginning, but it certainly has no end. When we have a talented employee, it is up to us to keep finding areas in which they can improve to expand their skill set. The most talented employees want feedback — more so than the less talented ones — and it is our job to keep it coming. If we do not, our best people will grow bored and complacent.

Finally, I want to use one line to summarize the whole discussion: We want to create such a team that everyone wants to join.

Theory of the perfect team

Google is known for being obsessed with studying its own performance and then adjusting accordingly. At one time Larry Page decided to get rid of all the middle management, but then quickly realized that the increased number of direct reports immediately slowed the company down. So they brought the managers back. With its Project Aristotle, Google wanted to figure out what great teams are made of. At the beginning, the researchers focused entirely on the individual skills of the team members and their performance, but the results were sobering: having all superstars on the team did not necessarily make a great team. After hundreds of teams were studied, a pattern emerged. In all the high-performing teams, the members felt psychological safety. That means they were not scared to admit an error or talk about their ideas. Furthermore, these teams all had empathy for each other (social sensitivity) and equal voices, which means that everybody’s opinion is heard when a decision must be made. This does not mean democracy; somebody can still make the decision, but people feel that their opinion is considered. For our Scrum teams, that means it is more a cultural challenge than a skill challenge, and the culture should incorporate these values:

  • Psychological safety
  • Social sensitivity
  • Equal voices

A popular hiring maxim in this regard: hire for fit, not for skill.

Multifunctional teams

A frequent question I get is: does a multifunctional team mean everybody on the team has to be an expert in everything? The question probably stems from two facts:

  • We want the team to be independent, with as few dependencies as possible on the outside. Therefore, we need all skills inside the team.
  • We advertise that there are no more specialists; rather, everybody has to be able to help out with other tasks.

Now if there is, for example, a database expert on the team, it does make sense that he helps with the database issues, but if he has idle time he should help out other team members. On the other hand, when there are a lot of database-related tasks and he becomes the bottleneck, others should be able to help him out as well.

The approach of the A-teams (small engagement units) in the Special Forces is this: each team member is cross-trained. That means he is usually an expert in one or two specialties, e.g. medic, radio, etc., but is also able to do general tasks in other areas. They have what are called T-shaped people: a person who is an expert in one or two areas but has general knowledge of all the other topics. Furthermore, every specialty exists twice (in case you lose the medic, there is still another one).

How can we apply that to our Scrum teams? First, it is important that all necessary skills are on the team, so that it can be independent. Then, we want every expertise twice, for example two people with highly developed database skills. But we also want T-shaped people who can work on other parts as well (not only their expertise).

Bringing it all together

In sum, it is not so important to have superstars on the team; what matters is how the team operates. The number one indicator is psychological safety, followed by social sensitivity and equal voices. Secondly, culture has a huge impact on team performance. Even though culture is probably the most difficult thing to change, moving to a commitment culture is one of the most performance-improving steps. Lastly, when it comes to who is on the team, it is important to have T-shaped people who have the necessary skills to work independently of the outside. However, they do not have to be superstars.

Takeaways

  • Define team ground rules that lead to psychological safety, social sensitivity and equal voices.
  • Implement a commitment culture.
  • Create a team of T-shaped people covering the necessary skills.

We always feel like our teams could be organized better. How to organize teams in an optimal way is a common question in Agile organizations, and one we should always discuss and answer together with the people in the actual teams.

Here I will try to provide an overview of five possibilities for organizing teams and the main factors to take into account. Our main considerations should be based on product complexity and maturity, as well as responsibility, coordination effort and sustainability. Depending on our specific context, one will suit our situation better than another, or maybe we will come to the conclusion that we are so Agile we can leave the concept of a team behind altogether.

Component-based teams: Component-based teams focus on specific components of the product that do not deliver customer value on their own. They are also sometimes called focus teams for that reason. The focus is usually necessary because of the use of very specific or exotic techniques: learning the skills and expertise to work in these teams takes years, so it is more logical to put the experts together. We can compare this kind of team with functionally specialized teams.

Feature-based teams: Feature-based teams focus on a specific customer feature or feature area within a product, for example search optimization in a web shop, or payments and conversion. Feature teams are necessary if the product is too large for a single team to optimize, or when a single larger base product is differentiated into various market segments after the fact. For instance, the same web-shop product can be differentiated for selling kitchen appliances, or tailored for selling TV sets.

Product-based teams: Product teams carry responsibility for the entire product or product/market combination. This means all of the skills and expertise necessary to develop and run the product reside within the team. Product teams are therefore suited to smaller products with a relatively small feature set, which is why we often see these types of teams in start-up settings. When developing a new product, we do not want it to be complex, and we will have a limited feature set, if not a single feature we want to validate first. Once the product matures, team orientation eventually changes towards feature teams to excel in feature performance and fulfil customer needs.

Other factors to take into account: There are a number of other things to take into account when discussing team organization with the teams. These are: responsibility, coordination and sustainability.

Responsibility does not decrease or increase depending on whether we work on a component or on the product as a whole. The person working on the component probably wants to do a good job just as much as the person working on ordering functionality. But the thing is, it is simply easier to feel responsible for an end result. And there is another question: if we only have component teams, who feels responsible for the end result?

Coordination is also something to take into account. The need for coordination increases from product towards component teams. Just think about all the planning, hand-overs and dependencies before a group of component teams manages to deliver the final product.

The final thing is sustainability. What if the technology becomes end-of-life? Are we going to fire the entire component team? Do we want to split them up over other teams? Could they adopt a totally new technology, and would that not undermine the whole reason for forming the component team in the first place? Technological advance is moving faster than ever. We know that new tech will come and go, so better to stay as flexible as possible.

One good conclusion so far can thus be to avoid component teams as much as possible and move from product to feature teams when product scope and/or maturity increase.

Customer journey teams: Customer journeys are becoming more and more important in differentiating our digital products or services from the competition. It is not just the core product that matters, but more and more how customers experience added value from that digital offering: from problem recognition, to orientation, to acquiring, to aftercare, to discarding or switching to a new offering. Teams aimed at optimizing the customer journey can therefore be very effective. Pretty much the same goes for customer journey teams as for product teams, though: if complexity increases we can have multiple teams, but eventually we will need teams specialized in certain parts of the journey, specific actors, or channels to differentiate further.

No teams: But wait, there is more! Yes, people, because real Agility would be to self-organize in shapes and forms that simply work. Should we not let go of the whole concept of a “team”? Should we not trust people to form groups that just tackle the challenges at hand, no matter what form or shape, like a swarm of bees or a flock of birds? I sure hope to see this sometime! I would expect to see it in an organization operating in a highly volatile and competitive market where constant change is necessary to keep up.

To recap, our main considerations should be based on product complexity and maturity, as well as responsibility, coordination effort and sustainability. Do we want to push the boundary towards customer journey teams, or maybe even dare to let go of teams? Whatever we decide, we should decide it together.