As experts in a fast-moving industry, we're always learning new things to share.
Here's what's on our minds.

The Modern Day Software Architect

By Ned Bauerle

The description of a software architect varies significantly in the IT (Information Technology) industry. You may hear a colleague introduce themselves with a title like systems architect, senior architect, or even chief architect, and you wonder to yourself: what does that title mean, and do I want it?

You are not alone!

Many business leaders, human resource departments, and even technical managers share this job title confusion. There is seldom any commonality in how different companies define the software architect’s role, let alone in how software architects are actually utilized within those organizations.

We get so accustomed to the busyness and pandemonium of our daily work, trying to keep up to date with new or updated technologies or with projects that are behind schedule, that there is seldom a moment of downtime. We never have an opportunity to stop and think about what the role of a software architect should mean in the broad landscape of software development.

Most professions have distinct role definitions and/or education paths. For example, an electrical engineer is different from an electrician, a structural architect is different from a builder, and a medical assistant is different from a doctor.

Why is there so much confusion about the architecture role and its responsibilities in the IT profession?

To answer this question you have to realize that we are the PIONEERS of this field of study! The IT profession is in its infancy at only about 50 years old. Computer science became a program of study around 1960. Electrical engineering, on the other hand, started in the 1880s, and structural architecture was established circa 2100 BC.

The earliest software architects would create detailed UML (Unified Modeling Language) diagrams, including detailed class definitions. Typically these diagrams were passed to a development team for implementation. This process worked well for hardware or software that was manufactured or delivered on a CD, but changes would require either a new product or a new packaged delivery. The process was slow and methodical, typically following “waterfall-like” project methodologies.

In contrast, the advent of the internet allows for cheap delivery and quick updates to software, including software that can be used in a web browser without the need for installation. Software can be developed and deployed so quickly that development teams can tackle small bug and feature releases simultaneously. Agile project methodologies paved the way for quick feedback cycles and smaller chunks of work. Creating concise UML diagrams was no longer necessary, and keeping them up to date with these small changes was nearly impossible. Tools like Enterprise Architect by Sparx Systems were developed to automatically synchronize code and diagrams, but in most cases these types of diagrams were simply not needed anymore.

So where does the software architect fit in now? Are architects still needed?

At the 2015 SATURN (Software Engineering Institute (SEI) Architecture Technology User Network) conference, keynote speaker Gregor Hohpe stated, “There is always an architecture; if you don't choose then one is assigned.” He also mentioned that if you do not plan the architecture before embarking on the solution, then the architecture you get will most likely not be the one you want.

Today we find confusion about what differentiates an architect from a lead developer, as the architect position in many cases is either eliminated or integrated into the development team as a lead developer. We have fallen into a pattern where most projects lack a plan (diagram) of the overall architecture that can be used to communicate choices during development. Developers often focus only on the task(s) at hand (or in the sprint) with little thought about how those tasks fit into the big picture of what is being developed.

We still need software architects! We just need to rediscover how the software architect fits into project teams and organizations.

While the “agile” world desires highly functional teams that are empowered to influence the project, there is still a need to create a high-level technical roadmap in order to communicate progress throughout our project teams. Without it we run the risk of earning a reputation for recommending work or technologies that executives may not fully understand. In those cases teams get pushback because, to the executives, it sounds like work that might not be necessary.

We need our software architects integrated with the development team where they can evaluate and adjust to changes in business concerns throughout the process. The architect should be involved early in the process to put together a few meaningful (high level) diagrams which can be used for communication both to management teams as well as development teams.

The size of the project, team, and organization all play factors in how the architecture will be created and maintained. Large organizations are starting to restructure to utilize architecture practices that keep a cohesive technical vision for the company which is then used to advise and unify development teams. Small businesses or projects might not have the capacity to create a separate architecture group so the team may include an architect or conduct mindful architecture sessions throughout the development cycle to create and maintain the architecture.

Traits of a Good Architect

Because most architecture roles do not presently have dedicated college programs and tools (like CAD for structural architects), we must rely on individuals who have deep experience and expertise with software development. A good architect typically has the following traits:

Experienced in Computer Programming - A good architect has been through both good and bad implementations and can use that experience to inform decisions.

Possesses an Analytical Nature - A good architect never stops learning. Technology advances so quickly that in order for an architect to be effective they must keep up with the possibilities.

A Good Communicator - Architects generally need to be able to communicate with both technical and non-technical individuals both verbally and visually through diagrams.

Good at Estimation - Architects should be able to scope out and estimate a project typically before implementation has started.

Both a Leader & a Team Player - Most of the planning is done up front, but an architect needs to allow the team to influence choices during implementation while still being decisive at times.

Technical Facilitator - During the implementation of a project an architect must remain involved and typically will facilitate communication between the development team, project management, and product owners so that everyone can understand the state of the project.

Publications & Research

To truly advance the profession of software architecture we need to find ways to convey both successful and ineffective solutions for the common problems we solve.

Currently there are hundreds if not thousands of projects in progress that are re-creating solutions that have been successful on other projects, perhaps even experiencing the same failed ideas along the way. If we spend our time redoing the same work over and over rather than sharing our efforts, we will advance very slowly indeed.

At this point you may ask “What can I do about it?” and I say to you …
Start blogging,
Start mentoring,
Share your knowledge however possible.

Publish articles or share patterns if you are not able to publish proprietary code.


QA Automation Strategies for ETL

By Dennis Whalen

I recently started a new assignment as the QA lead on a project team that is building an ETL (Extract, Transform, and Load) application.

ETL processes are all about moving data, and do not typically have a user interface. I was excited about a new challenge, but my experience with QA automation has been focused on user-facing applications.

Behavior-driven Development

On past project teams, we have utilized Behavior-driven Development (BDD) to drive the development of user-facing applications. A couple key components of BDD include:

  • creating user scenarios that describe desired application behavior
  • using those scenarios as the basis for development and automated testing

Typically, a team will develop user scenarios by utilizing the 3 amigos perspectives to help clearly define the desired application behavior. The unique perspectives of the product owner, developer, and tester allow us to define application requirements that address business needs and have the necessary detail to build and test.

A user scenario is a concrete example of the application’s desired behavior. Scenarios are written in a business-readable language called Gherkin. For example, here’s a sample scenario for a Google search:

Scenario: Basic Google search
Given a web browser is at the Google home page
When the user enters a search request for "Leading EDJE"
Then links related to "Leading EDJE" are shown on the results page

The goal of a scenario is to [1] define any preconditions for the test, [2] describe the user action, and [3] define the expected results from the user action. This is similar to Arrange-Act-Assert in unit testing:

  • Given – any required setup
  • When – the user interaction we are testing
  • Then – the expected results

Once a scenario is finalized and pulled into a sprint, a developer and tester work together to build and test it. The tester will also build automation components that will be included in the continuous integration (CI) process.
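On our stack the binding between Gherkin lines and automation code is done by SpecFlow in C#. As a rough, dependency-free illustration of the idea (the registry below is invented for this sketch, not SpecFlow's or Cucumber's real API), a step definition is just a pattern paired with a function:

```typescript
// Toy step registry: each Gherkin line (keyword stripped) is matched against
// a pattern and dispatched to the bound function. Real BDD frameworks do the
// same thing with far richer tooling.
type StepFn = (...captures: string[]) => void;

const steps: Array<{ pattern: RegExp; fn: StepFn }> = [];

function defineStep(pattern: RegExp, fn: StepFn): void {
  steps.push({ pattern, fn });
}

function runStep(line: string): void {
  for (const { pattern, fn } of steps) {
    const match = line.match(pattern);
    if (match) {
      fn(...match.slice(1)); // quoted values become function arguments
      return;
    }
  }
  throw new Error(`No step definition for: ${line}`);
}

// Bindings for the Google-search scenario above; in a real suite the bodies
// would hold the Selenium calls.
const actions: string[] = [];
defineStep(/^a web browser is at the Google home page$/, () => actions.push("navigate"));
defineStep(/^the user enters a search request for "(.+)"$/, (term) => actions.push(`search ${term}`));
defineStep(/^links related to "(.+)" are shown on the results page$/, (term) => actions.push(`verify ${term}`));
```

Running the three scenario lines through runStep records navigate, search, and verify actions in order; this is the mechanism that lets the same scenario serve as both requirement and automated test.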

But wait, aren’t we talking about ETL?

This all works fine for an application development effort with a user interface, but I am working on a project that does not have a user interface at all.

Regardless, I still want requirements to be written in a common language that everyone understands and I want to be able to build test automation components based on these requirements.

Can we really define a user scenario when there is no user interface? I think we can.

Scenarios in action

One of the first new ETL processes we looked at was to receive a file of deactivated Stores from a third party. Per the Product Owner, the ETL process needed to validate the filename, apply the contents of the file to the database, and create an Excel file that summarizes the activity. Even without a user, we can write a gherkin user scenario for this:

Scenario: Process a valid deactivated Store file
Given a valid deactivated Store file has been provided
When the "Process Deactivated Store" ETL process is run
Then the "Input" folder is empty
And the "Completed" folder contains the file that was processed
And the data in the file has been applied to the database
And the Excel output matches the content of the input file

As usual, the Given statement(s) will do any setup that is required. In this example, we will put the input file in the appropriate location prior to running the test.

The When statement is the action we want to test; in this scenario, that is running the ETL process.

Finally, the Then statement(s) describe the expected results.

Page Object Design Pattern

One key design pattern for building the automation components in a UI-based testing framework is the Page Object pattern. With this pattern, a class is created for each distinct page or view in the application. With UI tests, the page object encapsulates page locators and page specific logic in a single location. Global page locators and interactions can be further encapsulated in a base Page Object.

We are leveraging this concept with ETL testing. Our initial plan is to develop an ETL Object for each ETL process. As we identify common patterns within the ETL processes, we can further encapsulate common methods in a base ETL Object.
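As a sketch of the idea (our real implementation is in C#; the class and method names here are hypothetical, and in-memory arrays stand in for the real folders and database), a base ETL Object might look like:

```typescript
// Hypothetical base ETL Object: shared folder plumbing lives here, and each
// ETL process gets its own subclass, mirroring the Page Object pattern.
// Arrays stand in for the real Input/Completed folders to keep this runnable.
abstract class EtlObject {
  constructor(
    protected inputFolder: string[] = [],
    protected completedFolder: string[] = []
  ) {}

  dropFile(name: string): void {
    this.inputFolder.push(name); // test setup: place a file in Input
  }
  inputIsEmpty(): boolean {
    return this.inputFolder.length === 0;
  }
  completedContains(name: string): boolean {
    return this.completedFolder.includes(name);
  }

  abstract run(): void; // each process knows how to trigger itself
}

class DeactivatedStoreEtl extends EtlObject {
  run(): void {
    // stand-in for the real ETL run: move each input file to Completed
    let file = this.inputFolder.pop();
    while (file !== undefined) {
      this.completedFolder.push(file);
      file = this.inputFolder.pop();
    }
  }
}
```

The Given steps of a scenario would call helpers like dropFile, the When step would call run, and the Then steps would assert with inputIsEmpty and completedContains.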

So many tools!

There is a vast array of tools that support automated testing.

Our application is architected using the Microsoft stack, with SQL Server Integration Services and SQL Server Reporting Services as the backbone of our solution.

The key tools selected for our test automation are C#, SpecFlow, and Selenium, as they are in the same technology stack as the application. Using the same tools for development and test automation allows us to be agile with resource allocation, as the same team members can do both development and test automation.


Leveraging BDD with an ETL application development effort has provided a number of benefits:

  • Common understanding - the scenarios are written in common language understood by all.
  • Focused scope - the scenarios describe what the application needs to do and allow the developer’s work to be driven by the user’s needs.
  • Quick feedback - the scenarios can be automated, which allows them to be included in the CI process, providing immediate notification if something breaks.
  • Quality and agility – successful automated testing gives stakeholders confidence in the application and less fear of application changes.
  • Living documentation – as the application morphs and grows, the scenarios will be kept up-to-date to ensure tests keep passing.

Maybe we do have a user?

Finally, as we built user scenarios for our ETL process it became clear that we DO have a user. The user is the external actor taking action on our application. Sometimes that’s a human user, but in this case the “user” is the batch scheduler requesting the ETL process to be run.

Using BDD and user scenarios to drive application development and automated testing with an ETL process is yielding the same benefits we see with the typical user facing applications.

So you want to build a chat bot...

By Jason Conklin

There are countless articles popping up about how chatbots will replace web sites and change the way people interact with your brand or product. But you need to answer some important questions before diving into the bot world.

What should my bot be able to do?

Real-world bots are doing things like answering basic questions (FAQs), ordering pizza, scheduling meetings, updating task lists, and booking hotel rooms. While repeatable task automation will see continued bot growth, creating a great bot isn’t always that easy. A bot is only as good as the service it exposes.

Make a list of features you think your bot should offer. How are these features different from using an existing website or app? Will these features bring unexpected customer delight? (Hint: they should!)

Where will my bot be located?

A bot can be built for many different channels. It may be hosted on your website, inside Facebook Messenger, Skype, or Slack. The bot could communicate over SMS or E-mail. It could be a voice assistant running on Alexa, or Google Home.

Knowing the channel where your bot will live has a direct impact on what type of content you can provide. In a visual channel, like the web, a bot can send images and videos, but in a voice channel it will be limited to speech. Some channels will limit the interaction with the user to one session. For example, if the chat is directly on a web page, you may have a hard time reaching the user after the chat ends, since they may have left your site. However, in a messenger application you can send reminders and notifications long after the original conversation has ended.

A major factor in picking the right channel is determining where users are looking for you. If you have a strong social media following, that channel may be easier to start in. If no one is following you on Facebook, you may have a hard time launching a FB Messenger bot.

How will users interact with my bot?

Think back to the last time you called an automated phone system. You were presented with a menu of options and drilled through a massive tree with no idea what was next. Don’t put your users through that same pain!

Don’t try to be a fake human. If the bot announces itself as a machine, and not a human, the user is more likely to forgive it for not knowing an answer. This also gives you the opportunity to create a brand-focused personality for your bot.

Users will expect your bot to be a bit more open ended than a traditional web page or input form. Don’t force users into a box or restricted flow. Provide help when needed and allow the user to jump around. The flow of your bot should feel like a conversation and not an interrogation.

How do I design a conversational bot?

In the world of conversation-based bots, the user will be talking in their own terms, and the bot is expected to figure out what they mean and reply. The bot’s response should also guide the user on what to do next. This could be another question or a list of options. If the user feels stuck or doesn’t know what to do next, they will likely leave and not return.

Designing the conversation will likely be your most challenging task. You will need to think about common phrases a user might say and the responses the bot will return. Think about all the different ways a user might utter the same sentence. Write down the intent of each of these utterances. Then group the utterances by intent and create bot replies for each of them.
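To make the grouping concrete, here is a small sketch (the intents, utterances, and replies are invented for illustration) of utterances grouped by intent, each with a canned reply, plus a fallback that guides the user:

```typescript
// Utterances grouped by intent, each intent mapped to a reply.
interface Intent {
  name: string;
  utterances: string[];
  reply: string;
}

const intents: Intent[] = [
  {
    name: "store_hours",
    utterances: ["what are your hours", "when are you open", "are you open now"],
    reply: "We're open 9am to 9pm, Monday through Saturday.",
  },
  {
    name: "order_status",
    utterances: ["where is my order", "track my order", "order status"],
    reply: "I can help with that! What's your order number?",
  },
];

// The fallback still guides the user on what to do next.
const FALLBACK = "Sorry, I didn't get that. You can ask about store hours or your order.";

// Naive matcher: normalize the input and look for a known utterance inside it.
// A production bot would use an NLU service instead of substring matching.
function replyTo(userText: string): string {
  const text = userText.toLowerCase();
  for (const intent of intents) {
    if (intent.utterances.some((u) => text.includes(u))) return intent.reply;
  }
  return FALLBACK;
}
```

The sticky-note exercise below is how you discover which intents and utterances belong in a table like this in the first place.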

Don’t use terms that are internal to your business; you will need to write the conversation based on how a user talks.

Get two different colors of sticky notes. Pick a color for the user and a color for the bot. Write out a few different dialog flows between the user and the bot. Is the dialog too complicated? Was the user able to easily accomplish their goal? If you answered no, go back and rewrite the conversation. If you have a hard time drawing the conversation, the user is going to have a hard time too.

Don’t forget to include ways for the user to change their mind and how you would handle unexpected user input. Be on the lookout for areas where users may get stuck. Provide them with a way to start over or transfer to a live person if appropriate.

Where do I go from here?

Now that you have defined your feature set, picked a channel, and written the dialogs, you should be on your way to building and releasing your bot!

You should research and interact with other bots. Take note of bots that do things well and times where they struggle. Find a few bots in your target industry and think of ways you would improve them.


Angular. Is it just a name?

By Bob Fornal

Angular naming conventions ... we now have Angular, Angular JS, Angular 1, Angular 2, and Angular 4, with more to come. To say the least, conversations with coworkers and clients about Angular in general have become more challenging. Then, when I had a chance to listen to the Angular core team talking about Angular 4, they made it simple: Angular is Angular 2 and all versions moving forward, while Angular JS refers to what we know today as Angular 1. This is the convention I will be using within this article.

Google's Igor Minar said at the NG-BE 2016 Angular conference in Belgium that Google will jump from version 2 to version 4 so that the number of the upgrade correlates with the Angular version 4 router planned for usage with the release. Minar cautioned against getting too hung up on numbers and advised that the framework simply be called Angular anyway. “Let's not call it AngularJS, let's not call it Angular 2,” he said, “because as we are releasing more and more of these versions, it's going to be super confusing for everybody.”

When I had time to dig into Angular (2 at the time) on a real-world production project (in Ionic 2), I found the framework very easy to work with … things just made sense. A great many things came together on this project; what I would have estimated at two to four months of development time was done and ready for real-world testing in under five weeks, while I was learning TypeScript, Angular, and Ionic 2 (mobile development)!

Some of the questions I get asked when presenting on Ionic 2 and Angular relate to whether these frameworks are production ready. In my opinion, Angular is production ready; while there will be changes and improvements to keep up with, the framework is solid. The Google team is working at a fast pace, but they are generating framework code that is being used in production environments; this is the logical conclusion of a framework growing from a solid base. Ionic 2 is no longer in beta, but I would be hesitant to treat it as production ready unless there is minimal use of device-specific functionality. Both frameworks are great for generating proof-of-concept code.

Since working with Ionic 2, I have had a chance to listen to talks about React Native (still in the early stages of development) and am interested in learning more about this framework.

Now, on to what I learned from that initial project ...

Working in a framework that does not encourage two-way data binding was unusual at first, but it turned out to be a simple pattern to follow. I learned about Angular Modules and Components, then Templates and Metadata. Metadata is wrapped in a Decorator identifying the type of class being defined, and the Metadata provides specific information about the class. When designing the Templates, I found that data binding brought a whole new level to what Angular was able to do.

Data Binding is a mechanism for coordinating parts of a template with parts of a Component.

Binding           Example                   Description
Interpolation     {{ value }}               Displays a property value (from Component to DOM)
Property Binding  [property]="value"        Passes a value (from Component to DOM)
Event Binding     (event)="handler(value)"  Calls a component method on an event (from DOM to Component)
Two-Way Binding   [(ngModel)]="property"    Combines property and event binding in a single notation, using the ngModel directive

While this methodology is more readable and easier to follow, in my opinion the best part of data binding is the elimination of the need for $scope.$apply or $timeout within my code to handle changing data.

If there was a challenging part, it was learning about Observables and how they can be used effectively. While Observables are not necessary in all cases, I started writing them on my project to get familiar with what they could do and how they would impact code and development.

Having worked extensively with Promises, I know that they handle a single event when an async operation completes or fails. There is more to it than that, but this gives us a starting point when discussing Observables. Both Promises and Observables provide us with abstractions that help deal with the asynchronous nature of our applications.

An Observable allows us to pass zero or more events where the callback is called for each event. Often Observable is preferred over Promise because it provides many of the features of Promise, and more. It does not matter if you want to handle zero, one, or multiple events. You can utilize the same API in both cases.

Observables can also be cancelled, which is an advantage over Promises in most cases. If the result of an HTTP request to a server or some other expensive async operation is not needed anymore, the subscription to an Observable allows the developer to cancel the subscription, while a Promise will eventually call the success or fail callback even when you do not need the notification or the result it provides.
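As a rough sketch of what makes this possible (this is not the RxJS API, just the bare mechanics), an Observable is essentially a function that wires an observer to a producer and hands back an unsubscribe hook:

```typescript
type Unsubscribe = () => void;
type Observer<T> = (value: T) => void;

// Bare-bones Observable: subscribing runs the producer, and the returned
// hook lets the consumer cancel -- something a Promise cannot offer.
class SimpleObservable<T> {
  constructor(private producer: (observer: Observer<T>) => Unsubscribe) {}

  subscribe(observer: Observer<T>): Unsubscribe {
    return this.producer(observer);
  }
}

// A ticking source that can emit many values; unsubscribing stops the timer,
// so no further work (like an unwanted HTTP response handler) ever runs.
function ticks(ms: number): SimpleObservable<number> {
  return new SimpleObservable<number>((observer) => {
    let count = 0;
    const id = setInterval(() => observer(count++), ms);
    return () => clearInterval(id); // the cancellation hook
  });
}
```

With this shape, `const stop = ticks(100).subscribe(...)` followed later by `stop()` delivers nothing further, whereas a pending Promise would still settle and invoke its callback.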

Observables also provide operators like map, forEach, reduce, ... similar to an array.

Suppose that you are building a search function that should instantly show you results as you type. This sounds familiar, but there are a lot of challenges that come with this task. I have seen a lot of creative code over the years written to handle the issues that arise here.

  • We do not want to hit the server endpoint every time the user presses a key. Basically, we only want to hit it once the user has stopped typing, instead of with every keystroke.
  • We also do not want to hit the search endpoint with the same query params for subsequent requests.
  • We also need to deal with out-of-order responses. When we have multiple requests in-flight at the same time we must account for cases where they come back in unexpected order. Imagine we first type computer, stop, a request goes out, we type car, stop, a request goes out. Now we have two requests in-flight. Unfortunately, the request that carries the results for computer comes back after the request that carries the results for car.

Observables make handling these cases easy. In fact, this is one of the primary examples for using Observables at this time.
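In RxJS this is typically a short chain of operators (debounceTime, distinctUntilChanged, switchMap); the sketch below implements the same three behaviors with plain timers and Promises so the moving parts are visible (the function names are mine, not a library's):

```typescript
// Sketch of the search pipeline described above: debounce keystrokes, skip
// repeated queries, and drop out-of-order responses.
type SearchFn = (query: string) => Promise<string[]>;

function createSearchBox(
  search: SearchFn,
  onResults: (results: string[]) => void,
  debounceMs = 300
): (text: string) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let lastQuery = "";
  let latestRequest = 0;

  return (text: string): void => {
    if (timer !== undefined) clearTimeout(timer); // debounce: restart the quiet-period timer
    timer = setTimeout(() => {
      if (text === lastQuery) return; // same query as last time: skip the request
      lastQuery = text;
      const requestId = ++latestRequest; // tag this request
      search(text).then((results) => {
        // Only the newest in-flight request may deliver results; a stale
        // "computer" response arriving after "car" is silently dropped.
        if (requestId === latestRequest) onResults(results);
      });
    }, debounceMs);
  };
}
```

Each keystroke restarts the timer, so typing "c", "ca", "car" quickly results in a single request for "car"; the request counter handles the out-of-order case. An Observable pipeline expresses exactly this, declaratively.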

What I have learned:

  • The naming conventions used by Google will take some time to sink in.
  • Angular is a truly robust framework, production ready.
  • The learning curve when using Angular is minimal since most of the framework is intuitive.
  • Many of the hassles of Angular JS are reworked in a rich way.
  • Ionic 2, based on Angular, while fun, is not robust enough at this time for production.
  • React Native may be an intriguing solution for mobile development.
  • Data-binding and Observables have come a long way and can take away much developer pain.


What you don't know about the String could hurt you

By Dave Michels

Application development involves a great deal of character data. Almost every business application written today performs a great deal of string management, not just on the business data it processes, but also on strings the application itself manages or maintains, for things like labels or a spell-check dictionary. All of this string data consumes memory, and potentially a lot of it.

One of the ways that modern development languages have evolved to optimize their runtimes is the concept of “interning” or “pooling” strings. In many runtimes, like the JVM and CLR, every string is immutable, meaning that once a string is created, it cannot be changed. This is done in order to allow strings to be “pooled” and have multiple objects point to the same string if it exists in the internal pool. This is why a practice such as regular string concatenation within a loop is inefficient compared to using a class like StringBuilder: when strings are concatenated, a new string object is created with each concatenation. These immutable string implementations are based primarily on efficiencies for the runtime. By caching these strings in memory, there can be multiple references to the same string throughout a program. This has a great deal of benefit from an optimization perspective.
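The same immutability exists in JavaScript strings, so the cost is easy to illustrate in TypeScript (collecting parts and joining once plays the role StringBuilder plays on the JVM):

```typescript
// Strings are immutable here too: concatenation in a loop builds a brand-new
// string on every pass.
function concatNaive(parts: string[]): string {
  let out = "";
  for (const p of parts) out = out + p; // new string object each iteration
  return out;
}

// The StringBuilder idea: accumulate the pieces and allocate the final
// string once.
function concatBuilder(parts: string[]): string {
  return parts.join("");
}
```

Both produce the same result; the difference is how many intermediate string objects get created along the way.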

However, these cached strings are held in the memory of the runtime process, and that is a bad thing when it comes to sensitive data such as passwords or social security numbers. The moment a string is created it may be “interned” and added to the string pool for the runtime environment. What this means is that, for individuals with nefarious intents, memory scraping or dumping a process’s core can yield a plethora of valuable data that you may not realize is hanging around in your application. To illustrate, I’ve created a simple Java program and forced it to dump core using the kill command.

First, I set up my terminal to allow core dumps on my Mac, then ran my basic Java app. It’s easy to find running JVM apps via a simple process list command, so the next step was to send a signal to the process to have it dump core. The core dump shows up in the /cores directory (the default location for BSD/macOS). Now it’s just a matter of running “strings” (or “gstrings” from the Homebrew binutils package) against the core file and redirecting the output to a file (jvm-coredump-strings.txt in this case).
Looking at these files, they are obviously quite large: 6 GB for the core file and 127 MB for the text file. However, they don’t need to be around long enough to make them conspicuous; the core file can be removed immediately after creation, leaving the much smaller text file. Even then, the text file needs to stay around just long enough to be moved off the machine or combed through for any interesting data. I realize this is the brute-force way of illustrating this approach, and arguments can be made regarding the security measures modern OSes employ to prevent this type of thing. But the point is that this is an approach that can be taken to compromise your application’s data, and people more clever and nefarious than I can find ways to exploit it. By employing a couple of common practices, developers can lessen or negate this approach.

Whenever possible, try to treat sensitive character data as character or byte arrays, which allows the data to be garbage collected and easily overwritten. The .NET framework has a SecureString to deal with just this type of sensitive character data. This can effectively eliminate the resident sensitive data from memory. Also, beware of sensitive data being transformed into strings by convenience frameworks calling toString() (or ToString() in .NET); serialization frameworks are notorious for this when transforming to JSON or XML. Sometimes this is unavoidable, but you can help obfuscate the data by not assigning it to member attributes of classes with names such as “Password” or the like that can be easily searched.
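The same idea can be sketched in Node/TypeScript terms (the helper below is illustrative, not a standard API): hold the secret in a mutable byte buffer and overwrite it the moment you are done, instead of letting an immutable string linger in the pool.

```typescript
// Keep a secret in a mutable buffer and zero it after use, so the bytes do
// not sit in process memory waiting to show up in a core dump.
function withSecret<T>(secret: Buffer, use: (s: Buffer) => T): T {
  try {
    return use(secret); // hand the raw bytes to whatever needs them
  } finally {
    secret.fill(0); // overwrite the bytes as soon as we're done
  }
}
```

The caveat from the paragraph above still applies: if use() converts the buffer to a string, a copy escapes into the pool and the zeroing only protects the buffer itself.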

We devs deal with strings all day, every day. Strings are the most intuitive way to transmit, debug, and store application data. However, with system security breaches becoming ever more prevalent, we should always keep in mind the sensitivity of the data that we are using within our applications and exactly how the runtime manages it.

Analysts - Tools and Traits

By Brett Gerke

The purpose of this article is to describe the tools and skills desired in analysts. If you have a position as an analyst, or want one, you will want to read on.

First, why do we make a distinction between BAs (Business Analysts), BSAs (Business SYSTEMS Analysts), RAs (Requirements Analysts), and SAs (Solutions Analysts or Systems Analysts)? The job appears to be the same; the names appear to be altered to reflect attitudes, business politics, or focus, or (in my opinion) to provide confusion and ambiguity. Here, the overarching term 'analyst' will be used. To further level set, the term 'customer' means the person, organization, or other party that will be using the product we are producing or enhancing. They may also be called 'Product Owners' or 'Users'.

Basic Analyst Tools

Some skills and tools for analysts are basic. They can be learned by taking classes, through IIBA certification, through on-the-job training, or by reading the plethora of books on the subject. There are two tools you must start with; consider these the minimum set of expectations. The first tool is elicitation: finding out what the customer wants and needs. The basics are simple: most customers want to do THEIR JOB, More, Better, Faster. Beyond this, a more honed idea of their needs and wants is what elicitation is all about. Here you deal with the YEABUTs (Yea, I do want that, but...) and the I KNOW IT WHEN I SEE ITs (Just show me what you think I need and I'll tell you if that's what I want). It is important not to give up; press on to understanding. To do this you will need some of the skills described later.

The second tool is articulation of specifications and requirements: producing artifacts that will communicate the needs and wants for the system to the customer and the technical team. In the analyst world, there are formats and conventions for producing artifacts. These are forms of communication using visuals, audio, and demonstrations. Some examples: Story Cards, Actor Catalogs, Use Cases, and Process Flows. For more of the skills that you can learn to do your job, please refer to classes, the IIBA's BABOK, on-the-job training, and books.

Basic Soft Skills

There are also experiential skills to be learned and honed. They can't be picked up from classes or books (no matter what the author says). These are 'soft skills'.

As an analyst, a big part of your job is to build relationships. These relationships are critical to your success. Failure to form relationships (or destroying them) can sound a death knell for your job and possibly your career. Everyone you come in contact with is an opportunity to form a relationship. Here are several soft skills you need to examine and improve. Political skills: knowing the relationships that exist, the nature of those relationships, and the reasons they exist. Understanding if and where YOU fit into these relationships is key. Many organizations have a hierarchy that should be appreciated; titles, functions, and seniority are all important to forming, nurturing, and understanding your relationships within an organization. Public speaking is another soft skill to pay attention to. Your job requires that you are able to clearly present opinions, facts, and theories to your team and beyond it. Being able to put a presentation together and communicate effectively in person is a necessity; relationships can be started or furthered by an effective presentation.

The ability to communicate through visuals is a great skill to have. It is not difficult to develop if you do some homework and expose yourself to others' visualization techniques. Then copy them! Don't feel you are required to innovate. The important idea is to create visualizations that 'speak to the audience' and 'tell the story'; in other words, that provide not just information, but the context of the information. Edward Tufte has written a great series of books on this subject; get them from your library.

One skill shouldn't need to be listed, but it is the Achilles' heel of many analysts: listening. Learn not only to hear what is being said, but to understand it. Stay off your cell phone and email, and avoid other distractions. Be engaged when talking to people. React to what you heard, ask questions, and summarize; one suggestion is to use the phrase '...let me teach back what I understand...'.

As a final point, don't take things personally. Understand that people are not perfect. If you do your best at forming relationships and it still fails, count your wins and learn from the rest. In other words, have a 'thick skin'.

Skills that separate you from the pack

There are other experiential tools and skills that can be real game changers. Most analysts who have acquired them have built real patterns of success. Once again, these cannot be taught or acquired from books. You must use them, concentrate on them, and be critical of yourself to sharpen them.

Being able to argue is NOT the same as being able to persuade. Persuasion is an art, and depending on the person and the situation it can take several forms. An analyst who is skilled at persuasion can quickly become a change agent, and a leader. Persuasion allows you to 'argue' without offending your audience.

Flexibility is a critical skill to build. Being able to bend without compromising your beliefs, ethics, and values makes you a person who can build relationships with the most difficult people. The first step in building flexibility is to have a good grasp of your own beliefs, ethics, and values. The second step is to understand how others differ from you; try to empathize with them, possibly sympathize with them. Lastly, find a way to fit into their version of the world while keeping your integrity. HONESTY is in capital letters because it may be the most important skill anyone can have. The difficulty comes when you are around others who are less than honest.

When I was writing a dissertation, my academic advisor gave me the following advice: "Remember, all we have are our words. Make them count." This is great advice for all writing and speaking. In the analyst profession, it is not only great advice, it is a requirement. Good communication goes well beyond the ability to write, use standard formats, create great visualizations, and apply good grammar. It requires using the correct LANGUAGE to communicate ideas; analysts must be able to understand and use the correct words. There are three important languages analysts should concentrate on: the language of the organization, the language of the domain, and vernaculars.

Every organization has its own way of expressing certain ideas. Some organizations use abbreviations to communicate; this language is a way to articulate 'tribal knowledge' (internal knowledge held by those within the organization). Some terms used by organizations are vernaculars. For example, when discussing the movement of a vehicle, one organization uses the vernacular 'traction' while another uses 'travel'. When discussing a heart condition, one healthcare organization may describe it as 'AFib' while another uses 'atrial fibrillation' for the same condition.

Learning to 'speak' the organization's language helps build relationships within it. There is one great way to learn the language of the organization: listen. Listen in meetings, ask questions when you don't understand, and then USE the organization's language to communicate within the organization. Do your best to fit in and you will improve your communications.

Effective communication also requires knowledge of the 'language of the domain' the organization identifies with. These are standard terms that hold a uniform meaning within a domain, allowing effective communication between organizations in that domain. We don't need to go far for examples: analysts themselves use words that all analysts accept to mean something. Use cases, requirements, and stories each express a commonly understood concept.

One of the 'must haves' for an analyst is critical thinking. Interestingly, it is a commonly misunderstood skill. Let's start with the skill often confused with critical thinking: analytical thinking. Analytical thinking is the ability to break complexity into its component parts. Critical thinking is the ability to further learning by questioning EVERYTHING. Critical thinking starts with the young child who constantly asks... 'WHY?' Look for proof of ideas, concepts, and statements of fact. Here is a list of several components of critical thinking:

  • Understand links between ideas
  • Determine the importance and relevance of arguments and ideas
  • Recognize, build and appraise arguments
  • Identify inconsistencies and errors in reasoning
  • Approach problems in a consistent and systematic way
  • Reflect on the justification of one's own assumptions, beliefs and values

Use this list to start improving your Critical Thinking Skills.

Always be open to grow…never stop growing and learning.

Intro to Augmented Reality

By Tom Hartz


augment (verb): make (something) greater by adding to it; increase.

Augmented Reality is an evolving set of technologies with the potential to improve our lives in a variety of ways. To define it succinctly, AR is the rendering of digital information superimposed onto one's view of the physical world. You've seen it before: a prime example is the virtual down markers on football TV broadcasts.

down markers

Why care about AR?

Until recently, the prospect of seamlessly blending the physical and digital anywhere you want had remained in the realm of science fiction. AR has in fact existed in various forms dating back to the 1960s, but none of the implementations of the past were portable or very practical for consumer use. However, we are now witnessing this technology become mainstream, due primarily to the proliferation of mobile hardware. Successive iterations of the smartphone market have driven us toward having a compact, low-cost, powerful set of sensors and a display residing in nearly everyone's pocket. We are at an unprecedented level of hardware saturation, enabling some really compelling AR applications, and we haven't even seen the endgame yet. Will wearables replace all our smartphones? What does the next generation of mobile computing look like?

Unwinding the future possibilities is endless and entertaining, but I digress. Right now in the current mobile app landscape, AR is getting big! Lots of cool apps exist today that utilize computer vision and tracking algorithms to do all manner of neat things. If you haven’t seen these, take a few moments to check out some links:

  1. IKEA Catalog App - place and view furniture pieces in your living room
  2. Snapchat - face tracking with fun meme-ery
  3. HoloLens - featured at the 2016 Build conference running a Vuforia app

Just to name a few standout examples. While there are many apps already applying this technology, there is still plenty of room left for innovation and creative new ideas.

Diving into AR Development

I first became interested in AR when I attended the M3 Conference a few years ago, and heard a keynote presentation from Trak Lord of Metaio. I was inspired, so I looked around and found a plethora of platform options for building AR apps. From my own research, I can assert that the Vuforia SDK has the easiest learning curve today. I have used this toolkit to build one demo application for a paying client, a few internal company prototype apps, and many just-for-fun personal projects as well.

Looking back I am glad I didn’t invest much time learning the Metaio SDK. They were acquired by Apple in 2015, and have since shut down all public support. Apple has been very quiet about the acquisition, not releasing any news about what they are doing with the technology. Clearly they are looking to innovate in the AR space and are doing some internal R&D right now. Personally, I am excited to see what they come up with, and wonder what built in AR features the next iPhone may have!

Vuforia History

Vuforia began as an in-house R&D project at Qualcomm, a telecommunication and chip-making company. At the time, the company was looking for computationally intensive apps to showcase the prowess of their Snapdragon mobile processors. Nothing flashy enough for them existed on the app market, so they decided to push the boundary and create some new software on their own. They built the Vuforia base SDK and launched it as an open source extension of the AR Toolkit.

Since its inception, Qualcomm augmented the base SDK with a variety of tracking enhancements and other proprietary features. To sustain the project long term, they migrated away from the open source model and eventually sold ownership of the library to the software company PTC. Unlike the sale of Metaio's SDK to Apple, this transfer kept support very much alive for the development community. Since then, Vuforia has grown to be (one of) the premier Augmented Reality SDKs, used by hundreds of thousands of developers worldwide.

Using the Tools

Apps using the Vuforia framework today require a license key. Deployment licensing options start at a reasonably low price, and prototype “Starter” apps can be developed free of charge! You can create an unlimited number of prototype applications at no cost via their Developer Portal.

Building custom AR experiences necessitates thinking in 3D, and great tools go a long way toward easing that burden of complexity. The Unity 3D game engine is a very intuitive environment for editing scenes in 3D, and its scripting engine uses C#, making it a fantastic choice for developers who are versed in .NET. To me, the best part of the Vuforia SDK is the Unity plug-in. It enables you to build AR applications, without writing any code at all, that can run on pretty much any mobile phone or tablet.

Putting together a marker-based AR app is incredibly easy with these tools. If you have no experience working in Unity, there will be a learning curve involved. A good primer for understanding the Project, Hierarchy, Scene, and Inspector panels can be found here. Once you are familiar with the tools, building AR apps is easy and a lot of fun! Below is a short list of steps to exemplify how quickly you can get an AR app up and running on a webcam enabled machine. Not included here are the steps for deployment to a mobile device (a topic for another day).

  1. Create New Project in Unity (use 3D settings).
  2. Delete Main Camera from the scene.
  3. Import Vuforia Unity Package (downloaded from the Developer Portal).
  4. Import target database (downloaded from the Developer Portal).
  5. Add two prefabs to the scene from Vuforia Assets: ARCamera and ImageTarget.
  6. Select ARCamera in the scene hierarchy. In the Inspector Panel, paste in the App Key (created via Developer Portal), then Load and Activate image target database (two checkboxes).
  7. Select ImageTarget in the scene hierarchy. In the Inspector Panel, select from the dropdowns for Database and Image Target (stones).
  8. Import a 3D model asset to the project (drag and drop from file system into Unity).
  9. Add the model asset to the scene as a child object of the Image Target.
  10. Center and resize model as needed to cover the Image Target.

You can download my completed example Unity project from GitHub.

AR Tom

Custom Authorization Filters in ASP.NET Web API

By Chad Young

The ASP.NET Web API framework is a great choice for those that want a lightweight Service Oriented Architecture (SOA) to facilitate passing XML, JSON, BSON, and form-urlencoded data back and forth with a client application. Inevitably, you’ll need to secure at least some of the endpoints.

At a minimum you’ll need to have some sort of Authentication and Authorization mechanism in place.

  • Authentication: The process of confirming that a user is who they say they are.
  • Authorization: The process of determining if the authenticated user has the proper roles/permissions to access a piece of functionality.

In Web API, the message pipeline looks something like this:

Web API message pipeline

As the picture illustrates, you can handle authentication in two places: a host-level (IIS) HttpModule can handle authentication, or you can write your own HttpMessageHandler. There are pros and cons to both, but the main focus of this article is the custom authorization filters that come next in the pipeline, after a user has been authenticated. Once a user is authenticated by an HttpModule or a custom HttpMessageHandler, an IPrincipal object is set. This object represents both the authenticated user and certain role membership information. Some applications have their own custom role and permission implementations, which is where custom authorization attributes become useful.

Authorization filters are attributes used to decorate your controllers and actions, and they can be applied at three different levels:

  1. Globally: In the WebApiConfig class you can add:
    		public static class WebApiConfig
    		{
    		    public static void Register(HttpConfiguration config)
    		    {
    		        config.Filters.Add(new MyCustomAuthorizationFilter());
    		    }
    		}
  2. At the controller level:
    		[MyCustomAuthorizationFilter]
    		public class AController : ApiController { }
  3. Or at the endpoint level:
    		public class AController : ApiController
    		{
    		    [MyCustomAuthorizationFilter]
    		    public async Task<HttpResponseMessage> AnEndpoint() { return null; }
    		}

The code associated with each attribute is executed in the same order as listed above, so you can nest functionality if need be. The attributes are also inheritable, so you can put one on a base class and it will be inherited by any controller that derives from it. The exception to this is the built-in AllowAnonymous attribute, which, when applied, short-circuits the need for authorization.

When the AuthorizeAttribute is encountered, its public OnAuthorization method is executed. The base implementation is below:

		public override void OnAuthorization(HttpActionContext actionContext)
		{
		    if (actionContext == null)
		        throw Error.ArgumentNull("actionContext");

		    if (SkipAuthorization(actionContext))
		        return;

		    if (!IsAuthorized(actionContext))
		        HandleUnauthorizedRequest(actionContext);
		}

As you can see, an error is thrown if there is no action context. Then SkipAuthorization is called to see whether an AllowAnonymous attribute is present, in which case the authorization process is skipped. Finally, IsAuthorized is called, and if it fails, HandleUnauthorizedRequest is called. The overridable methods on this attribute are IsAuthorized, HandleUnauthorizedRequest, and OnAuthorization, so a solution could be implemented in a couple of ways. You could override OnAuthorization and set the response message yourself, but the best solution for the scenarios I've run into is to override the IsAuthorized method and let OnAuthorization perform its base execution. The method names also suggest that, when determining whether a user is authorized, IsAuthorized makes more sense.

Below is the very high level code representing the custom filter implementation:

		public class MyCustomAuthorizationFilter : AuthorizeAttribute
		{
		    protected override bool IsAuthorized(HttpActionContext actionContext)
		    {
		        if (!base.IsAuthorized(actionContext)) return false;

		        // Do some work here to determine if the user has the correct permissions to
		        // be authorized anywhere this attribute is used. Assume the username is how
		        // you'd link back to a custom user permission scheme.
		        var username = HttpContext.Current.User.Identity.Name;

		        return username == "AValidUsername";
		    }
		}

There are a couple of things to point out about the above code. First, the attribute inherits from AuthorizeAttribute in the System.Web.Http namespace. The System.Web.Mvc namespace also contains an AuthorizeAttribute with similar behavior for the MVC framework, but the two are not compatible. All of the magic happens in the overridden IsAuthorized function. Here you have access to the HttpActionContext, and through it the Request, request header values, ControllerContext, ModelState, and Response. Within IsAuthorized is where any work to decide whether the user is authorized is done. If the user is not authorized (IsAuthorized returns false), the response message is set with a 401 (Unauthorized) status and returned. If custom processing of an unauthorized request is needed, you can also override the HandleUnauthorizedRequest method.

Proper use of these attributes can clean up your code so that authorization is a concern separate from the functionality of the endpoint itself. It also allows you to completely customize your role/permission architecture.

Effectively Documenting a Development Project

By Keith Wedinger

Over the course of my 26+ year career as a software developer, I've had the opportunity to work on many software projects for several companies. More often than not, I joined a team working on a project that had started some time ago. One of the challenges I often faced when joining an ongoing project was getting up to speed and becoming productive as quickly as possible, and the lack of good development project documentation made this significantly more difficult. So, how do we solve this? Ultimately, new members are going to be brought on to help with a project. Or, in consulting, the project is nearly complete and it is time to transition ongoing development and support to the client. This is where effective development project documentation is an essential tool.

So, what constitutes effective development project documentation? The goal is to get new team members or the client onboarded and productive with the project as quickly and as painlessly as possible, with minimal assistance from current team members. Depending upon the complexity of the project, a good onboarding target to shoot for is one day. How does one achieve this goal? Effective project documentation must cover the following essentials, and it must be as specific as possible; ambiguity is not a good thing. Strive to limit choices where possible.

  • Include contact information for all team members and their key responsibilities. This lets everyone know who to contact with questions.
  • What operating system and version is needed? In most cases, development workstations will be provided to team members with the necessary OS.
  • If the project involves mobile development, what mobile devices are needed and where does one request/acquire the devices? Ideally, the necessary hardware will be readily available.
  • What SDKs and/or JDKs and versions are needed? Include download links.
  • What IDE and version is needed? Again, include a download link. If a license needs to be purchased or acquired before the IDE is downloaded and installed, clearly document how this is done. Ideally, the necessary licenses will already be purchased and ready to use.
  • How is the IDE configured to conform to project development standards? Every modern IDE has export functionality that allows settings to be exported and then, imported. Leverage this to make IDE configuration that conforms to the standards practically foolproof.
  • What version control system is being used and how is access to it requested? The popular choices are Git, Subversion and CVS. Include any contact information and/or instructions required to request access. Ideally, access to version control will be set up prior to onboarding.
  • What version control client software and version is needed? Remember those download links.
  • What build tool software and version is needed? Some IDEs like Xcode do not require separate build tool software. Some do. Examples include Ant, Maven and Gradle. Don't forget those download links. Also include any step by step instructions required to install the build tool software because nearly every build tool requires some OS specific configuration to be done.
  • What dependency management server is used and how is access to it requested? Build tools like Maven and Gradle depend upon a dependency management server to download the necessary dependencies to build a project. And most corporations concerned with software licensing host a dependency management server in house to control what libraries can be used to develop and build projects. Include any step by step instructions required to configure the IDE and/or build tools to use the dependency management server.
  • Once everything above is installed and configured, how is the project pulled from source control, built, tested, installed/deployed and executed? Provide repeatable step-by-step instructions.
  • What standards and/or guidelines are followed when developing, testing, and committing changes?
  • Include an FAQ section that includes answers to commonly encountered questions or problems.

Over time, project changes will require changes to the project documentation. So, make sure that the project documentation is always kept up to date.

When teams spend the time to effectively document their project as described above, it enables effective and efficient onboarding of, and knowledge transfer to, new team members and clients, allowing them to get down to the business of being productive as quickly as possible.

Some Musings about Embedded Application Development

By Bill Churchill

With the ubiquity of ARM processors and *nix distributions running on them, embedded application development now more closely resembles desktop or server application development. No longer is it mandated that an application be its own operating system. Anyone who has done application development in a *nix environment can develop for an embedded Linux appliance. However, there are some significant differences that change the approach to development in these environments. Right after application design, hardware constraints dominate a developer's thoughts.

An embedded device has a limited number of processes running at any one time, and each must be a good team player; no single application can monopolize any resource on the device. Using a smartphone as an example, if one application takes up all the memory or processing cycles, the other applications will cease to perform responsively. You may not even be able to make calls or send texts until a reboot clears the problem.

Memory is a major limitation. In a PC environment, one typically has ample memory, and even when that is not the case, you have the option of upgrading it. While it may be possible to run a memory-managed VM like the Java virtual machine on your device, performance can vary greatly. Typically, the language of choice for a new application will be closer to the metal (C or C++), which allows finer-grained control over memory allocation. Embedded applications usually grab any memory they will need at startup. This prevents out-of-memory errors and application halts for garbage collection (if a managed environment is used): the types of problems that can hide during development and QA but rear their ugly heads in the field.
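
The "grab memory at startup" pattern can be sketched as a fixed buffer pool: all allocation happens once at initialization, and the steady-state code path only reuses what was claimed. The sketch below is in Python purely for brevity (real embedded code would do this in C or C++ with static or startup-time allocation), and the `BufferPool` name and sizes are illustrative, not from any particular library.

```python
class BufferPool:
    """Preallocate a fixed set of buffers; never allocate after startup."""

    def __init__(self, buffer_size, count):
        # All memory the application will use is claimed here, at startup.
        self._free = [bytearray(buffer_size) for _ in range(count)]

    def acquire(self):
        # Fail fast and predictably instead of allocating more memory.
        if not self._free:
            raise RuntimeError("buffer pool exhausted")
        return self._free.pop()

    def release(self, buf):
        # Return the buffer for reuse; nothing is ever freed or reallocated.
        self._free.append(buf)


pool = BufferPool(buffer_size=1024, count=8)
buf = pool.acquire()
buf[0:5] = b"hello"   # work happens in-place, with no new allocation
pool.release(buf)
```

Because exhaustion surfaces as an explicit, immediate error rather than a late out-of-memory failure or a garbage-collection pause, it is the kind of bug that shows up in QA instead of in the field.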

While SD cards are growing in capacity, the root disk space is still very limited. Often the SD card is not used, as EPROM may be preferred. Small, tight libraries are extremely important in this type of development. The ubiquity of BusyBox on embedded devices illustrates this. Even within the application code, keeping things small and simple is important. Many systems load the entire root filesystem into memory. Another side benefit of using small libraries is the inherent small memory footprint when the application is loaded.

This is by no means an all-inclusive list, but hopefully it will assist any developers looking to make the leap to embedded development. As phones become more sophisticated and wearables become more common, now is a good time to look at embedded application development for fun and/or profit.

Augmenting Architectures with a Service Proxy

By Trent Brown

You've committed to implementing a modular, service-oriented architecture (or, in today's parlance, a microservice architecture). Business logic will be broken down into small, discrete components and exposed as REST services. The promise of scalable, flexible and adaptable software applications is at hand! While this approach to designing solutions has many of the advantages being hyped, there is at least one downside: applications consuming all of these distributed services have to keep track of them. The services being consumed could be deployed across multiple servers in different data centers, and increasingly in the cloud. One approach to taming the complexity for service consumers is the use of a service proxy.

The service proxy does much of what a traditional proxy server would do: sit in front of destination resources and provide routing and filtering. In addition to these basic functions, a service proxy can also handle authentication and authorization, providing single sign-on to secured endpoints. There are open source service proxies, such as Repose, that support defined authentication schemes out of the box. Repose supports the OpenStack Identity service as well as the Rackspace Identity service with minimal configuration. Additionally, its modular, pluggable architecture allows for developing a custom filter to integrate with any other authentication/authorization provider.

The proxy can also provide vital protection to the lower layers in the system stack by serving as a circuit breaker, detecting load outside the normal range and preventing requests from flooding downstream. A sufficiently advanced proxy will perform this rate limiting in an intelligent way by cutting off access only for abusive users or IP addresses while maintaining service for other consumers.

Service Proxy

But wait, there's more! A service proxy can also provide a wealth of information about how services are being utilized. The proxy can generate logging that can be streamed to an analytics engine to provide insights into usage patterns. This can allow for better allocation of infrastructure resources as well as understanding how services are being utilized. Understanding which services are most valuable to consumers can inform the design of future services.

Besides the Repose proxy mentioned above, Netflix has open sourced the Zuul service proxy. Zuul integrates with other open source tools provided by Netflix to manage a microservice architecture. If you are interested in the details of some of the cool stuff Netflix has released to the community, check out their GitHub repo here.

Project Management 101: Time Management

By Michael Zachman

Here's the situation: You get to work and you have all of these tasks that you need to get done. You feel confident that you will get them accomplished, so you begin your day. You are fifteen minutes into your first task and the phone rings. You finish the call ten minutes later and think to yourself, "Now, where was I?" It takes you about five minutes to get back to where you were and an Instant Message pops up from a co-worker which is "urgent". You help them and are about to get back at it when someone pops their head into your cubicle and says, "Do you have a minute?" You want to respond with an emphatic "NO!" but you do the right thing and help them with their issue. The day continues on in this fashion and before you know it, you look up and it is time to go home. You have accomplished nothing on your list of tasks and you wonder, "Where did the day go?" Does this kind of day sound familiar to you?

It is very easy to get distracted in today's world, especially with all of the technological advances of the last 25 years (Microsoft Lync, instant messaging, online meeting sites, video conferencing, email, texting, etc.). You have to manage your time effectively and efficiently to maximize your productivity. Don't get too discouraged, because there are many things you can do to manage your time better and have a productive day. Here are a few tips to help you improve those time management skills:

  1. Set specific times to check your email – Emails come in from the beginning to the end of the day and can consume all of your time if you let them. One thing that I have found helps is to set specific times of the day to check your email. I check my email first thing in the morning and first thing when I get back from lunch, and I respond to emails at those times and ONLY at those times. Unless there is a specific reason to deviate from this, you will find that you get a lot more done when you have extended periods of time to focus on the tasks at hand. This can also be applied to voicemail!
  2. Do Not Disturb! – If you absolutely have to get work done, put up a "Do Not Disturb" sign so that everyone knows you need to focus. Again, in today's world, just putting a sign outside your cubicle or office may not be enough. You may need to set all of your communication devices to "busy" as well: set your instant messaging status to "Busy" or "Do Not Disturb", close your email, and turn off your smartphone so that you are not interrupted by emails and texts if the work you are trying to complete is that important. I have found that people are usually very understanding about this but believe me, if they really need you, they will find you.
  3. The 80/20 Rule – By the numbers, the 80/20 rule means that 80 percent of your outputs come from 20 percent of your inputs. Well, I think this can be applied to your time management skills: 20% of your actions, discussions, and thought processes produce 80% of your results. It is impossible to get everything done because there will ALWAYS be something to do. But if we prioritize our tasks and stay focused despite the inevitable interruptions, the 80/20 rule tells us that we will produce results. So don't stress out when you don't get everything on your list done and remember that based on the 80/20 rule, you have probably accomplished a lot more than you realize.

So, the next time you have a day where you don't feel like you have accomplished everything you wanted to accomplish, try using these 3 time management tips to maximize your daily productivity by investing your time wisely. You may just find that you are a little less stressed, a little more focused and have produced a lot more results than you ever realized possible.

"The key is in not spending time, but in investing it." – Stephen R. Covey

Use Cases for IaaS

By Dave Michels

One of the most pivotal advancements in information technology in the last 3 years is the advent of Infrastructure as a Service (IaaS). By definition, IaaS is a provision model in which an organization outsources the equipment used to support operations, including storage, hardware, servers, and networking components. The service provider owns the equipment and is responsible for housing, running and maintaining it. The client typically pays on a per-use basis. The most commonly known model for this is in the Amazon cloud, but other cloud providers such as Windows Azure also support IaaS as part of their offering.

IaaS is most easily justified in a scenario where an organization is just starting its IT services or products. As the provider hosts the physical infrastructure, the client incurs little to no physical infrastructure cost. Building or leasing a data center is a costly endeavor when the resources required to get IT up and running are not known initially. An entire organization's physical infrastructure can be designed and set up remotely on the provider's infrastructure. The key element here is that thought should be put into what the logical structure of the network, subnets, firewalls, servers, segmentation, etc. will need to be. Organizations should ensure that the virtual infrastructure to be set up is designed based on requirements for the services needed, so as not to waste time and effort. That being said, part of the benefit of using this model is that any issues or refactoring required are typically not nearly as costly as they would be if physical hardware were purchased, set up, cabled, and configured, only to find that it was not the best approach. With IaaS, tearing down and rebuilding is a matter of clicking through a management console to disable or delete existing servers, routes, and other virtual infrastructure, then re-creating based on new requirements.

Many larger organizations have already made a significant investment in their IT infrastructure. When a company invests hundreds of thousands or even millions of dollars in physical infrastructure such as servers, switches, SANs, and load balancers, the case for IaaS is different. An important point to note is that IaaS should be looked at as a complement to existing infrastructure, not a means to replace it. The case where this is most evident is in transient infrastructure needs tied to non-persistent application requirements. For example, in an IT development shop that practices Continuous Integration (CI), the resources needed to execute builds and run automated unit and integration tests can be a bottleneck. If there are 50 different builds configured to execute based on changes detected from revision control, a team may wait hours to receive build feedback or have deployment artifacts generated if there are only 3 servers that can execute builds in the CI environment. The wait will vary based on the size of the projects, compilation time, and tests to be executed, but there is a bottleneck nonetheless. As the resources required to accommodate this transient spike in builds are expensive and time consuming to procure, it makes little sense to occupy valuable internal server resources for servers that may only be needed a small percentage of the time. Many CI server platforms already have plug-ins for existing IaaS providers that will start cloud based servers as needed to accommodate a spike in build requirements, then shut them down when the build queue reaches a minimum threshold. The cost incurred is only for the time that the servers were running to execute builds and deplete the build queue.
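The elastic build-agent behavior described above can be sketched in a few lines of Python. This is purely illustrative: the CloudApi stand-in class, the thresholds, and the one-agent-per-five-builds ratio are all assumptions, not any real CI plug-in's API.

```python
class CloudApi:
    """Stand-in for an IaaS provider's SDK (start/stop instances)."""

    def __init__(self):
        self.running = 0

    def start_instance(self):
        self.running += 1

    def stop_instance(self):
        if self.running > 0:
            self.running -= 1


def scale_agents(cloud, queue_depth, per_agent=5, min_queue=1, max_agents=10):
    """Start one cloud build agent per `per_agent` queued builds, up to a
    cap; shut all agents down once the queue drains below `min_queue`."""
    if queue_depth < min_queue:
        # Queue is empty (or nearly so): stop paying for idle agents.
        while cloud.running > 0:
            cloud.stop_instance()
    else:
        wanted = min(max_agents, -(-queue_depth // per_agent))  # ceiling division
        while cloud.running < wanted:
            cloud.start_instance()
    return cloud.running
```

In a real CI platform the plug-in would call the provider's SDK instead of the stand-in class, but the threshold logic is essentially this: scale up with the queue, tear everything down when it drains.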

The same case can be made for automated load and functional testing. IaaS is ideally suited to automated load tests and simulation of production load distribution for public or globally distributed applications. Oftentimes, large organizations have multiple data centers, but they may not be geographically distributed far enough to come close to accurately simulating what their production load will look like. Using cloud based IaaS, organizations can spin up virtualized servers in datacenters dispersed throughout the world and coordinate tests that simulate large traffic volumes coming from Asia, Europe, and South America hitting a site in your corporate datacenter in Columbus, OH. Again, once the tests are complete the services can be shut down, and the costs incurred are minimal, based on the length of time the tests ran and the servers were running.

Cloud providers that support IaaS should be looked at as a tool that can easily supplement IT infrastructure for transient resources, saving valuable internal hardware and infrastructure resources and more effectively simulating real-world conditions that cannot be easily replicated with existing resources. This translates into cost savings for what would otherwise be idle servers and better simulation of real-world production environments.

Lessons from the Battlefield - Stakeholder Management

By Erica Krumlauf

When you think of the most desired skills of a project manager, where does stakeholder management fall? I often find that this skill is one that falls low on the list for many organizations. They don't call it out as a required skill on their project manager job descriptions, and don't ask tough questions on how to deal with challenging stakeholders during job interviews. Why is this skill often overlooked? Is it that project managers and teams believe that stakeholders aren't important members of the project? Or do they believe you can overcome a 'bad' stakeholder if you just push them out of the way and ignore them?

In my vast experience in program and project management of client projects, I have worked with a multitude of stakeholders, each with a different personality. I have been screamed and sworn at during tough times; and I have been smiled at and hugged at the end of successful projects. However, I have always had a high degree of mutual respect and trust with my stakeholders (yes, even those that screamed at me). Why is this? I believe it's due to the manner in which I have approached each of those stakeholders – be it the CIO at a multi-billion dollar international client, or the business manager of a small company. I hope you find these 5 keys to be beneficial in helping you to navigate the muddy waters of stakeholder management.

Are the right people around the table?

Oftentimes, the biggest challenge with stakeholder management is the fact that key stakeholders are absent from the table. A stakeholder is defined as "any individual, group or organization that can affect, be affected by, or perceive itself to be affected by a project". As project managers, we often come into projects well after the "stakeholders" are identified. But it is our job, and responsibility, to ensure that the right people have a seat at the table. Leave no stone unturned. Understand what systems and business processes are impacted by your project – and get out and talk to the business and other IT teams. Find out how they may or may not be impacted. If there are impacts, talk to key personnel and encourage them to sign up to be involved as project stakeholders. Even if they refuse to come to meetings and be actively involved, they will be impacted and it is your job to ensure they understand what that impact is.

Okay I'm a Stakeholder. Now What?

Signing up to be a stakeholder is easy. But do your stakeholders understand what that really means? Everyone must understand their role and responsibilities, and understand what accountability they will hold on the project. Will they own providing requirements for how the new system will work? Will they own providing timely answers to outstanding questions? And if so, how timely must they be in decisions? Will they own ensuring resources from their area are responsible for completing tasks? Do they know the timelines for these tasks? Mutual understanding of accountability and ownership is key in stakeholder management. Make sure everyone has a clearly defined role and is accountable for the items that they own.

What's their Agenda?

Everyone on a project has an agenda, whether it be personal or for the benefit of the entire organization. Getting to know your stakeholders and what is important to them is critical to ensure alignment and buy-in from everyone. Get to know what makes them tick – what is important to them? What is their current opinion of the project? Are they in it to see it succeed or are they a naysayer who will do everything in their power to try to make the project fail? Understanding each stakeholder and their attitude towards the project is key. Conduct stakeholder analysis to understand each person's expectations and how they define success on the project. And use this information to refine the project purpose and goals. Make sure that the project is a "W" for all stakeholders – not just the ones that scream the loudest.

We're all in this together.

Teamwork. Where would we be without it? To build teamwork you need trust and loyalty. Getting this from a group of stakeholders that have different agendas is challenging. But much like herding cats, the project manager must ensure that all stakeholders are in alignment and working towards the common goal. To do so you must build an environment of open communication, one where all stakeholders have the opportunity to speak up and provide input. Make sure the common goal is well known – and that everyone is striving to ensure it is met.

Do you hear what I hear?

Transparency. This is the 5th and final key in stakeholder management. A project manager who is dishonest, not forthcoming, or hides key information from any or all of the stakeholders invites disaster. Share the same information with all stakeholders, in a timely manner. Do not shelter certain stakeholders from details because you are fearful of their response, or because you don't want their opinions and thoughts. If all stakeholders have the same information and knowledge, they can work together to resolve any project issue.

Follow these simple keys and the results will be astonishing. You'll find better alignment in project scope, fewer personal agendas, and more collaboration in ensuring that the project goals are met. After all, if your stakeholders are in alignment, the rest of your team will follow suit and you'll all be high-fiving it at the finish line!

Hybrid vs. Native - What Should Be Your Mobile Strategy and Why?

By Keith Wedinger

Since joining Leading EDJE in February of 2012, I have been involved in several mobile app development opportunities, both from a sales perspective and from a software architect perspective. One of the first questions that usually comes up from the client is this: Should I use hybrid or native to develop my mobile app? Before I answer this question, let's briefly go over what each of these choices is.

The hybrid approach generally means using web based technologies like HTML5, CSS and JavaScript to develop a web app that is then packaged and delivered as a native app using PhoneGap. To the user, it generally looks and operates like any other app installed on their mobile device. The biggest and most trumpeted benefit of this approach is this: using one code base, one can target multiple platforms (think write once, run anywhere). But there is a tradeoff. Using this approach means that the app is restricted by what the web browser on the targeted device is capable of doing. Specifically, the UI for the app and the app's performance will be constrained by the device's web browser. On iOS devices, this is less of an issue because its web browser generally supports the latest web standards and performs well. On Android devices, this constraint depends significantly upon the Android version on that device. Versions older than Ice Cream Sandwich have browsers that typically do not perform well and contain several web rendering anomalies. Browsers in newer Android versions perform better and contain fewer web rendering anomalies.

The native approach means using the development stack for that platform to develop a native app. For example, developing a native iOS app means using Xcode, the iOS SDK, and Objective-C or Swift on a Mac, and delivering the app to iOS only. The biggest benefits of this approach are performance and end user experience. The app is constrained only by what is possible within that platform's SDK and by the capabilities of the device. The biggest drawback often called out is this: when one chooses this approach, one must develop and deliver a completely separate app for each targeted platform, which roughly multiplies the time and cost of developing and delivering the app by the number of platforms targeted. There are development tools and platforms available to help mitigate this drawback and leverage skills you already have in house. For example, Xamarin is a C#/.NET framework based development platform that allows one to develop native mobile apps with a significant level of code reuse across the targeted platforms. Please note that this reuse is typically 60-70%.

Now, let's get back to our question. What approach should one use? Well, the answer is "it depends" and here is why. Before one decides on an approach, one must carefully consider the following.

Requirements. What business problem are you trying to solve? What benefits will the mobile app bring to your customers or to your enterprise? What data and/or systems will your app integrate with? Simply stating "I/We need a mobile app" is not good enough. Understand what you are going to develop and deliver and why.

Define your targeted user base. What devices are they using? What platforms and versions are they using? What screen sizes, resolutions and orientations do you need to support? Each variation can increase UX development time by 25-50% and UX testing time by 100%. If your targeted user base is using only one platform (examples: iOS, Android, Windows Phone), then the key benefit mentioned above for the hybrid approach may not be a factor.

Know your team. What skills can you leverage? Consider languages, platforms, UX design and QA when answering this question. Also know your existing code base: what can you reuse? Then map these skills and your existing code base to what is required for each approach.

Understand the costs. This is independent of the approach one chooses. For each device being targeted, a real device must be purchased for testing purposes. For each platform being targeted, one or more development workstations supporting that platform must be purchased. For example, targeting iOS requires a Mac to build and deliver the app.

After you carefully consider what is outlined above, then you are ready to make a knowledgeable mobile strategy decision. Also consider developing prototypes to help make and verify your decision. Avoid the "I have a hammer and everything looks like a nail" decision. One approach is definitely not ideally suited for all solutions.

Solving a Leaky Basement with Raspberry Pi

By Joseph Beard

When my wife and I bought our first house a few years ago, we thought we had everything figured out. We moved in mid-September and, after an initial issue with the heat pump, everything went smoothly. But we discovered just how quickly that can change when our basement flooded the next Spring.

After the initial panic wore off, I discovered that the sump pump had failed, allowing the water level to rise and overflow the crock. I replaced the sump pump and assumed that all would be fine. A few months later, however, I arrived home to another soggy basement. This time, the sump pump had shaken itself against the side of the crock and had trapped the float switch into the off position.

We were fortunate both times in that I happened to catch the flood as it was starting, which allowed me to save the carpet and many of our belongings from ruin. While I took measures to prevent the sump pump from moving out of place, I knew that it was only a matter of time before it somehow failed again.

I needed a way to be alerted of an impending disaster before it happened. My first step led me to a simple water level alarm like this one. This saved us from two more potential incidents, but it comes with a critical flaw: it is only effective if someone is around to hear it. If we are on vacation or even just out for the day, the basement may still flood and no one would know until it was too late. Since I always have my phone, I wanted something that could send an alert, via SMS or email, if something was going wrong. This would allow me to call a friend or neighbor to check on things and save the day even if I am across the country.

I played with a few ideas with an Arduino, but I was never satisfied with the networking options available to the platform. When the Raspberry Pi was announced, I knew I had finally found the device that I needed: a small, extremely low-powered Linux system with a full suite of the standard tools. In other words, it was cheap and reliable. I immediately preordered and (eventually) received one of the first shipment Model B devices.

Raspberry Pi

The Milone Tech eTape Continuous Fluid Level Sensor is a printed, solid-state sensor with a resistance that varies in accordance with the level of the liquid in which it is immersed. No moving parts to get stuck! I used the MCP3008 Analog to Digital Converter and a custom differential amplifier circuit to interface the analog sensor output with the General Purpose Input and Output pins on the Raspberry Pi.

Milone eTape Sensor

I wrote a simple Python script to periodically poll the current value of the eTape sensor. Since the output value from the ADC is a 10-bit integer (i.e., between 0 and 1023), this was an appropriate place to convert the value into a depth (in inches). This script publishes the readings to a ZeroMQ topic.
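A minimal sketch of that polling script might look like the following. The SPI read of the MCP3008 is stubbed out, and the linear depth conversion, port number, and topic name are assumptions for illustration; a real eTape installation needs calibration against known water levels.

```python
import json
import time


def adc_to_inches(raw, max_depth_in=12.0):
    """Convert a 10-bit ADC reading (0-1023) into a depth in inches.
    A simple linear mapping is assumed; a real eTape needs calibration."""
    return (raw / 1023.0) * max_depth_in


def read_adc():
    """Placeholder for the MCP3008 SPI read; returns a raw 10-bit value."""
    return 512  # stubbed out for illustration


def main():
    import zmq  # pyzmq; imported here so the helpers above work without it

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.bind("tcp://*:5556")  # assumed port
    while True:
        depth = adc_to_inches(read_adc())
        # Prefix each message with a topic name so subscribers can filter.
        pub.send_string("waterlevel " + json.dumps({"depth_in": depth}))
        time.sleep(5)  # poll interval, in seconds
```

On the Pi itself, read_adc() would talk to the MCP3008 over the SPI bus (for example via the spidev module) on whichever channel the amplifier output is wired to.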

Another Python script subscribes to the topic. When the value exceeds a threshold, it sends an email to my wife and me alerting us to the issue. It follows up with another email when the value returns to normal.
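The core of that alerting script is a small threshold state machine plus an email send. The sketch below assumes an SMTP server on localhost, example addresses, and an 8-inch threshold; the ZeroMQ subscription loop that feeds readings into next_state() mirrors the publisher and is omitted here.

```python
import smtplib
from email.message import EmailMessage

HIGH_WATER_IN = 8.0  # assumed alert threshold, in inches


def next_state(alerting, depth):
    """Return (new_alerting, message) for one reading. A message is produced
    when the depth first crosses the threshold, and again when it returns
    to normal; otherwise message is None so no duplicate emails are sent."""
    if not alerting and depth >= HIGH_WATER_IN:
        return True, "Sump water level HIGH: %.1f in" % depth
    if alerting and depth < HIGH_WATER_IN:
        return False, "Sump water level back to normal: %.1f in" % depth
    return alerting, None


def send_email(body):
    """Illustrative only: the server, addresses and subject are assumptions."""
    msg = EmailMessage()
    msg["Subject"] = "Sump pump alert"
    msg["From"] = "pi@example.com"
    msg["To"] = "us@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as server:
        server.send_message(msg)
```

Keeping the alerting flag in next_state() is what makes the follow-up "back to normal" email possible without flooding the inbox while the level stays high.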

A third script subscribes to the topic and archives the readings into a data file. I would like to use this data in the future to enhance the alerting capabilities. For example, if the sensor value swings wildly or remains relatively static for an unusually long period of time, it may indicate a malfunction. For now, this script merely collects the data.

Finally, because no sensor project is complete without charts, a final script subscribes to the topic and forwards all of the sensor readings to Xively. Xively provides a simple way for me to view the current water level and chart recent values from anywhere. The graph below shows a recent 24 hour period of readings taken.

water level chart

Ironically, there has not been a single high-water event since building and installing this system, but knowing that this system is in place and functioning gives me significant peace of mind whenever I travel.

Java NIO and Netty

By Andrew May

The java.nio package was added to the Java Development Kit in version 1.4 in 2002. I remember reading about it at the time, finding it both interesting and a little intimidating, and went on to largely ignore the package for the next 12 years. Tomcat 6 was released at the end of 2006 and contained an NIO connector, but with little or no advice about when you might want to use it in preference to the default HTTP connector, I shied away from using it.

So what is NIO anyway? It appears that it officially stands for "New Input/Output," but the functionality added in Java 1.4 was primarily focused on Non-blocking Input/Output and that's what we're interested in.

In Java 1.7, NIO.2 was added, containing the java.nio.file package that tries to replace parts of java.io, and there the "New" moniker makes more sense, but NIO.2 has little to do with what was added in NIO. So it's another Java naming triumph.

The traditional I/O APIs (e.g., InputStream/OutputStream) block the current thread when reading or writing, and if they're slow or perhaps blocked on the other end then many threads can end up unable to proceed - this is how your web application grinds to a halt when you have a database deadlock and all 100 connections in your connection pool are allocated. Each thread can only support a single stream of communication and can't do anything else while waiting.

For a servlet container like Tomcat, this traditional blocking I/O model requires a separate thread for each concurrent client connection, and if you have a large number of users, or the connections use HTTP keep-alive, this can consume a large number of threads on the server. Threads consume memory (each thread has a stack), may be limited by the OS (e.g., ulimit on Linux) and there is generally some overhead in context switching between threads especially if running on a small number of CPU cores.

I still find the Non-blocking I/O support in the JDK to be somewhat intimidating, which is why it's fortunate that we have frameworks like Netty where someone else has already done the hard work for us. I recently used Netty to build a server that communicates with thousands of concurrently connected clients using a custom binary protocol. Out of the box Netty also has support for common protocols such as HTTP and Google Protobuf, but it makes it easy to build custom protocols as well.

At its core is the concept of a Channel and its associated ChannelPipeline. The pipeline is built up of a number of ChannelHandlers that may handle inbound and/or outbound messages. The handlers have great flexibility in what they do with the messages, and how you arrange your pipeline is also up to you. You may also dynamically rearrange the pipeline based upon the messages you receive. It's similar in some ways to Servlet Filters but a lot more dynamic and flexible.

Netty manages a pool of threads in an EventLoopGroup that has a default size of twice the number of available CPU cores. When a connection is made and a channel created, it is associated with one of these threads. Each time a new message is received or sent for this channel it will use the same thread. To use Netty efficiently you should not perform any blocking I/O (e.g., JDBC) within one of these threads. You can create separate EventLoopGroups for I/O bound processing or use standard Java utilities for running tasks in separate threads.

The API assumes asynchronicity; for example writing a message returns a ChannelFuture. This is similar to a java.util.concurrent.Future, but with extra functionality including the ability to add a listener that will be called when the future completes.

                    channel.writeAndFlush(message).addListener(new ChannelFutureListener() {
                        @Override
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // the message was written to the socket
                            } else {
                                // inspect future.cause() to see why the write failed
                            }
                        }
                    });
Netty is under active development and in use at a number of large companies, most notably Twitter. There's a book in the works, but the documentation is generally good and the API is fairly straightforward to use. I've found it a pleasure to use and would recommend it for projects that require large numbers of concurrent connections.

Using the Decorator Pattern

By Nathan Kellermier

The decorator pattern is used to extend the functionality of an object, similar to inheritance. What sets the decorator pattern apart is the pattern can be used to dynamically extend the functionality of an object without requiring all instances of that object to include the extended functionality. In this way, the functionality can be added or removed at run-time based on user interactions or as the result of a business rule.

While examining the decorator pattern we see it consists of an interface, a concrete implementation, an abstract implementation forming the decorator base, and the actual decorator classes. The goal of the pattern is to provide the ability to wrap the concrete implementation with the decorator classes and provide new and/or differing functionality from the original object.

Looking at an example where there is a MessageProvider class containing a single method that returns a message, we will see how the decorator can be used.

First, we need an interface and a concrete implementation that returns the message passed to the constructor.

                        public interface IMessageProvider
                        {
                            string GetMessage();
                        }

                        public class MessageProvider : IMessageProvider
                        {
                            private string _message;

                            public MessageProvider(string message)
                            {
                                _message = message;
                            }

                            public string GetMessage()
                            {
                                return _message;
                            }
                        }

With the concrete classes in place, we can create the decorator hierarchy to extend the functionality of an IMessageProvider class. We will create two decorators, the first inserts text before the message in the MessageProvider, and the second simulates logging to the console by writing out an IMessageProvider's message before returning the message to the caller. The decorator hierarchy consists of an abstract base class specifying that an instance implementing IMessageProvider (described above) needs to be passed into the constructor. The IMessageProvider instance is then stored in a variable and a base implementation is created that acts as a pass-through to the stored IMessageProvider.

                    public abstract class MessageProviderDecoratorBase : IMessageProvider
                    {
                        protected IMessageProvider _messageProvider;

                        protected MessageProviderDecoratorBase(IMessageProvider messageProvider)
                        {
                            _messageProvider = messageProvider;
                        }

                        public virtual string GetMessage()
                        {
                            return _messageProvider.GetMessage();
                        }
                    }

Having created the decorator base class, we can look at the first of the decorators. The GreetingMessageDecorator is a simple decorator that takes in an IMessageProvider and adds a simple greeting to the message returned from calling GetMessage. The purpose of this decorator is to demonstrate how the decorator can add functionality to an object that implements IMessageProvider.

                public class GreetingMessageDecorator : MessageProviderDecoratorBase
                {
                    public GreetingMessageDecorator(IMessageProvider messageProvider) : base(messageProvider)
                    { }

                    public override string GetMessage()
                    {
                        return "Hello, your message is: " + base.GetMessage();
                    }
                }

The second decorator in the sample application is the ConsoleLogMessageDecorator. This decorator performs an action on the IMessageProvider it was provided during construction before eventually returning the result of the GetMessage method to the caller. The action taken is a simulated log of GetMessage's result written to the console. As simple as this action is, it could be extended to a timer wrapping the call to the IMessageProvider's method, special handling, or a before/after type of action.

                public class ConsoleLogMessageDecorator : MessageProviderDecoratorBase
                {
                    public ConsoleLogMessageDecorator(IMessageProvider messageProvider) : base(messageProvider)
                    { }

                    public override string GetMessage()
                    {
                        string message = base.GetMessage();

                        Console.WriteLine(Environment.NewLine + "LOG: {0}" + Environment.NewLine, message);

                        return message;
                    }
                }

Finally, we have a simple driver application. The driver builds up a MessageProvider and then creates the decorators, displaying the result of the GetMessage call in each version. Note, the ConsoleLogMessageDecorator takes as input the GreetingMessageDecorator, prints it to the screen as a log message, and returns the chained results of each decorator performing its action. The output demonstrates how the decorators have affected and interacted with the object being decorated, and, in the case of the ConsoleLogMessageDecorator, how chaining decorators works.

                class Program
                {
                    static void Main(string[] args)
                    {
                        IMessageProvider messageProvider = new MessageProvider("Sample message");
                        Console.WriteLine(messageProvider.GetMessage());

                        IMessageProvider greeting = new GreetingMessageDecorator(messageProvider);
                        Console.WriteLine(greeting.GetMessage());

                        IMessageProvider logDecorator = new ConsoleLogMessageDecorator(greeting);
                        Console.WriteLine(logDecorator.GetMessage());
                    }
                }
code output

Although the example presented is simple, the decorator can be used to implement complex before/after actions by chaining operations in multiple decorators. One could add timing, logging, and transactions to the same object simply by creating the appropriate decorators and adding them in a chain at runtime. A business rule could determine that a special row needs to be inserted in a database marking an entity in some way, and a decorator adding the needed functionality could be added to the object when the rule is triggered, changing the course of action for the object in question.

The source code for this article can be cloned from GitHub here for use under the MIT license.

Project Management 101: Managing Client Expectations

By Michael Zachman

Here's the situation: You are the project manager on a project and all seems to be going well. You are about halfway done with the project and are on schedule and on budget according to your plan. It is time for a steering committee meeting with the client and you are ready. During the meeting, you provide a walk-through of the application and its capabilities when the client stops you and says in an irritated and frustrated voice, "This is not what we asked for and does not meet our requirements!"

This can be a very frustrating scenario but it happens more often than you think. The good news is that there are steps you can take to mitigate your project risks and make sure that the client is on the same page with you and getting what they want. Here are three (3) things you can do to help you manage your project more effectively and efficiently:

  1. Set Expectations Early and Often – From the day you engage with the client and begin managing the project, you need to ensure that you are setting expectations with the client based on the current project parameters (budget, schedule, resources, vendors, etc.). As project parameters change, be sure that you are updating the client on how this impacts the project. Whether it is good news or bad news, they have hired your expertise and will appreciate you giving it to them straight.
  2. Document Everything – No matter how trivial or how complex it seems, you need to document everything. Take good notes (I highly recommend Microsoft One Note) during meetings and phone calls, and keep all project emails organized so that you can refer to them later. Project documentation is essential at all stages of the project and you need to get signoffs/approvals from the client to ensure that they are in agreement with what is being produced for them. This does not always mean that things will not change but when you have a record of what they agreed to it is difficult for them to argue about the impact of any changes to the project.
  3. Communicate, communicate, communicate – You have heard with real estate that it is all about location, location, location. Well I say with project management that it is all about communication, communication, communication. There is no such thing as over-communication on a project (or in most areas of life!). If you are communicating to your client audiences appropriately and consistently, there will be less of a chance for misunderstandings and more of a chance for a smoothly run project.

So, to stay away from being blind-sided by misguided or unrealistic expectations, try using the 3 ideas above to manage the client's expectations to a successful project outcome that everyone wants to achieve. I think you will find that there will be less confusion, more understanding and in the end, a client who will be pleased with the results and thanking you for a job well done!