Web Dev: Prototyping vs Wireframes

There was a very cool discussion over at the NYC CTO School Meetup discussion board (is that the name? The branding is all over the place there). Name aside, it was one of the more interesting discussions about moving away from PSD/wireframe look-and-feel development as a first step and toward prototyping directly with JavaScript/Node.js, with the goal of iterating more quickly. I’ve pasted some highlights of the conversation below.

Original Message: 

Jean: “Trying to increase ability to iterate on our front end designs, and get away from working with PSDs / Wireframes to prototyping in HTML / JavaScript directly.

Do most people just use straight HTML / JS, or do you use frameworks, such as jQuery, Backbone, Angular.js, or Ember, or templating languages such as Mustache, etc.? We’re especially considering using Angular for this on top of Twitter Bootstrap.
Also, on the server side, do you typically also make up a minimal server component, or usually just hard-code the screens with the content? “
I’ve quoted some of the more interesting responses from the discussion here. These are not my responses, just a summation of what the participants said. You can hit the CTO School Meetup discussion board for more information:
  • “We use Angular / Bootstrap for rapid UI prototyping. Angular is nice in the way you can very easily isolate the backend stuff that you can later plug in whenever it’s ready.”
  • “1) Feature and experience definition: wireframes linked to a flow chart
    2) Clickable prototype:
    – backbone, d3, bootstrap plugins
    – standard CSS prototyping library – layout icons etc… – bootstrap works too – depends upon skinning constraints
    – data in standalone sample JSON feed files
    – node server with 2 end points – one for rendering html, second for forwarding to APIs etc.. as needed”
  • “All that said, prototyping design work directly in HTML / CSS is a pain. I’m rarely happy with design work that is done in-browser. It’s a limiting environment, and instead of working around it to make the design work, the limitations tend to hold back the design.”
  • “I would suggest Angular / Bootstrap on top of a light weight server running Node would be ideal. From there your front-end can stub out the JSON APIs it would expect from the backend, so when the server is ready you just need to redirect the front end to the true backend server endpoint. This is the direction we are moving towards, we just got our first few pages setup with Angular for our current release.  Angular’s use of HTML-based partials is really convenient as well.”
  • “As Josh said, just write your html files in a sandbox folder, and include whatever css and js libraries (bootstrap, font awesome, jquery, etc.) you need from CDNs or the same folder.

    Then, when you want to see how the html would work through a web server, you can use Python’s SimpleHTTPServer module from a command prompt.”

  • “If you’re looking to iterate faster on your framework-backed front-end designs, you might want to focus as much attention on your workflow as you do on picking the right framework, be it Angular, Ember, or whatever. You should have a look at Yeoman, as it supports just about every front-end framework out there, allows you to mix/match components easily, and effectively manage your prototyping toolchains.”
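The Python web-server tip above is a one-liner in practice. Here is a minimal end-to-end sketch; the folder name, stand-in page, and port are placeholders I’ve chosen, not details from the thread:

```shell
# Serve a sandbox folder of prototype pages over plain HTTP.
mkdir -p sandbox && cd sandbox
echo '<h1>prototype</h1>' > index.html      # stand-in page
python3 -m http.server 8000 &               # Python 2: python -m SimpleHTTPServer 8000
SERVER_PID=$!
sleep 1
curl -s http://localhost:8000/index.html    # fetch the page back
kill "$SERVER_PID"
```

This is enough to exercise relative links, AJAX calls to local JSON files, and CDN-loaded libraries without any real backend.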

Dealing with long running feature branches and the resulting merge-hell

While they’re fresh in my head, I want to get down some lessons learned from a crazy merge of a long-running feature branch that I recently had to oversee. There are a few points I want to make, and I invite you all to add on or tweak things.

Background: I tasked two developers with merging in three months of code from two branches. The branches had little to do with each other and had diverged significantly. The developers approached this by branching master at the tip and merging in the feature branch. Along the way they discovered a mistaken merge into master from the source branch, which further complicated things and pulled technical leadership/experts in to help fix it, further limiting the amount of oversight on the merge process. Given the crazy amount of work, the devs relied on discarding changes from particular places in the master branch, since there was no domain expert on hand from that branch and limited technical oversight throughout the process. In addition, the standup meetings had moved away from status updates to pure roadblock resolution, so there was no insight into the details a particular dev was struggling with.

Result: When the merge was complete, we ended up with code where many of the domain additions from the master branch would not work, as well as massive incongruities in the front-end code.

So here are some lessons learned. Granted, most of these have been worked out by the industry through some sort of process, and I know these aren’t my original thoughts, but I figured it would help your team out when they make decisions around these issues next time.

  • If we’re ever in a situation where we end up with a long-running feature branch that doesn’t get merged into master, it would be good to give the devs the following direction.
    • Attempt to merge in smaller sets of code from each branch rather than biting off the whole thing. This means taking one-week or one-month increments from each branch if a clear delineation can be made. The timelines of the commits don’t necessarily have to line up. This allows for smaller bites to be taken between the two branches. Encouraging the developers to take this approach gives them a strong sense of agency over the code they are merging, rather than having to fall back on faith.
    • Use the power of git. In git the saying is “merges are easy”. Using git’s rebase feature might have been a more effective mechanism than the CVS-style approach I suggested to the developers.
      • The rebase in a long-running merge, combined with a divide-and-conquer approach, looks like the following.
        • Branch from the last known merge point on the source branch and check out that branch.
        • Pick two commits a couple of weeks ahead (again, taking domain situations into account) in the source and target branches.
        • Interactively weave the commits together with manual conflict resolution.
      • This results in a clean timeline on the merge branch, merged code, and greater developer confidence in the code. It’s effectively the same as the merge, but it forces the developer to consider the histories of the code in small commit chunks, one by one.
      • This also gives us much more power to back out changes.
    • If at all possible, consider building a dev team that consists of one developer from the source branch and one from the target branch. Combining knowledge this way gives the devs the ability to work out issues rather than hope things will just work out after the merge. The scope of a merge this long-running tends to require two developers.
    • Since these merges take days, a daily 15-minute standup with technical leadership should be set up, and the developers should answer the following questions:
      • Daily Standup Questions
        • What was completed yesterday?
        • What will be completed today?
        • What issues came up in the merge today that resulted in confusion and require roadblock assistance?
      • This gives technical leadership insight into problems ahead of time, rather than the traditional wait until it’s all merged.
    • Capture all of the merge domain issues that come up and are not immediately resolvable in your bug/issue tracking system. You’re guaranteed to forget the details of each one, so do yourself a favor and get them into your system.
    • If we can foresee a long-running feature branch ahead of time, then it would be good to favor some of the following processes.
      • Creation of a _merge branch and weekly, if not daily, merges between the branches.
        • Identification of one or two developers (cross domain if possible) to perform this merge and provide status in the standups so that technical leaders can resolve any domain issues that pop up immediately in the source or target branches.
      • Creation of a _merge environment that QA can use to test code and look for domain issues.
      • Daily building of the _merge branch via CI.
    • Start to encourage the developers to rebase and craft their code locally using git so that commits reference features.
      • There are a few occasions where commits in one branch need to be excluded or reconsidered because of domain conflicts in another branch.  Having resolution on these features will allow us to quickly execute on excluding or including features, or setting them to the side for a later release.
        • Currently a feature is crafted with multiple commits over time and pushed out to origin.  This makes it hard for developers and leaders to cherry pick features using “the git”.
      • Developers who are transitioning from the CVS or SVN way of doing things aren’t going to be aware of how to use git to make this work for them, so this is going to come with time and experience.
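As a concrete sketch of the divide-and-conquer rebase/merge idea above: the snippet below builds a throwaway repo with a diverged master and feature branch, then merges each side forward in small increments. The branch names, commit counts, and the choice of the 20th commit as the “couple of weeks ahead” point are all illustrative, not a prescription.

```shell
# Throwaway demo repo: master and feature diverge by 25 commits each.
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo base > base.txt && git add . && git commit -qm base
git branch -M master            # normalize the branch name for the demo
git branch feature
for i in $(seq 1 25); do echo m$i >> master.txt; git add .; git commit -qm m$i; done
git checkout -q feature
for i in $(seq 1 25); do echo f$i >> feature.txt; git add .; git commit -qm f$i; done

# The divide-and-conquer merge itself:
BASE=$(git merge-base master feature)    # last known common point
git checkout -qb merge-wip "$BASE"
# Advance "a couple of weeks" at a time; the 20th commit past the base
# on each side stands in for that point here.
git merge --no-edit "$(git rev-list --reverse "$BASE"..master  | sed -n 20p)"
git merge --no-edit "$(git rev-list --reverse "$BASE"..feature | sed -n 20p)"
# Resolve any conflicts now, while the diff is small, then repeat with the
# next increment until both tips are in. A bad round backs out cleanly with
# 'git merge --abort' (or a reset to the last good merge commit).
```

Each round is small enough for the developer to actually reason about, which is the whole point of the approach.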

    git gc issues and running out of memory

    I was recently running into some issues with a massive git repo (12G). It was originally 2G. Since this cloned repo is used by the continuous integration system, I end up doing a lot of git checkout -- . on the repo to get it back to a state where any of the changes I made to build it are removed. However, this constant head switching caused cruft to accumulate over the past two months, and now the repo is just way too massive.


    So I attempted to run git gc on the repository but ended up with the following issue.


    [tomcat@aa-cruisecontrol starter_relate_int]$ git gc
    Counting objects: 160591, done.
    Delta compression using up to 4 threads.
    fatal: Out of memory? mmap failed: Cannot allocate memory
    error: failed to run repack
    [tomcat@aa-cruisecontrol starter_relate_int]$

    After a couple more attempts I still could not repack it. So I ended up blowing the repo away and recloning it, which reduced its size to 2G. Draconian, yes, but I am glad to have the space.
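For what it’s worth, there is a gentler option I could have tried before the reclone: capping the memory the repack step is allowed to use. These are real git config keys, though the 100m values are just illustrative; the sketch below runs them against a throwaway repo rather than the real 12G one.

```shell
# Normally you'd run this inside the problem repo; a throwaway repo here.
git init -q gc-demo && cd gc-demo
git config user.email dev@example.com && git config user.name dev
echo hello > f.txt && git add . && git commit -qm init
# Cap repack's memory appetite, then retry the gc:
git config pack.windowMemory 100m
git config pack.packSizeLimit 100m
git config pack.threads 1
git gc --quiet
```

Dropping pack.threads to 1 in particular avoids multiplying the delta-compression window memory by the number of threads.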



    Installing Ruby on Rails and the dreaded Error Message

    If you attempt to install rails via the standard

    yo@yo:/ sudo gem install rails

    And you get the dreaded message:

    ERROR:  could not find rails locally or in a repository

    There is a simple cure: install the new gem (1.3.1) package from source and try again. Apparently 1.1.1 and 1.2.1 get the connection to the mother ship snipped somehow. I tried and failed repeatedly to install Rails via gem until I reinstalled the new gem package.

    Keep in mind this is a full install. You’ll have to get the package, unzip and untar it, and build the sources via

    sudo ruby setup.rb




    Getting rid of life-spam

    I have a suggestion for you.


    Get rid of life-spam.  Declutter your email box by ensuring it doesn’t get cluttered at all.

    I realized this week, after listening to The 4-Hour Workweek audiobook, that I not only spend too much time in the inbox, but I actually HAVE NOTHING TO DO in my inbox. For example: over the past two days my Gmail and Emich inboxes combined have received over 40 promotional emails from companies I have done business with in the past.

    Since I’ve made it a point to de-stuff, I’ve also had to make a point to ignore and delete these promotional messages. I’m not interested. I am interested in deals, but honestly you can find those same deals by searching google/ebay/yahoo, whatever. These giant time-wasters give you absolutely NO VALUE.


    Do yourself a favor: next time you do your Inbox Zero or collection process, go ahead and unsubscribe from all of that promotional email. You’ll be so glad you did!



    Don’t worry if your idea already exists

    Are you the type of person who has lots of ideas and struggles to find time to do any of them? Then this post isn’t for you.

    This post is for you if you’ve finally found some time to work on that wonderful idea that’s been bumping around your head. You sit down and start to do some research on the topic. You know what you need to get it done. Your laundry is set and your girlfriend/wife/husband/boyfriend is away for the week. Things are all good. You open up google and do some searching for the ideas and tools you need to make your idea a reality.

    And then you see it.   Someone did it already!

    Don’t sweat it.  You need to continue to work on your idea!

    One of the most disheartening things that can happen to you is seeing the fruits of someone else’s labor evolve into a working, real implementation of that world-changing idea you had. It’s easy to question yourself and think that you lost your chance or that you’re behind the game. You might even think it’s time to pack it in and have kids and hope they succeed at the dreams you failed at. You might even go so far as to sell your things and move to an ashram. What’s an ashram?

    In any case, you will need to fight off that urge to give up and continue following your dreams. There is one strong reason for this: innovation.

    While someone else may have already built the thing you are thinking of, you have no idea whether your efforts will produce something that is better, faster, stronger, more user-friendly, less power-hungry, more earth-friendly, and so on. Take a look at Google. When Google was launched there were already a number of search engines on the market. Fast forward to just a few years after its launch and you see that Google completely dominated the field. It would have been easy for Sergey Pergey to just say “F it. I’m going to work for some dude and make my money,” but he didn’t. Sergey and Larry went about building Google into what it is today.

    So if you think that your idea is now useless, or that you can’t implement it, please think again. You might just have the improvement that pushes the idea to the next level.


    Open the Windows! Why Microsoft should open source Windows.

    Life today is tied together by computers. Our relationship with companies is driven by billing cycles and the algorithms implementing those bills. Our relationship with peers and family is driven by email, photo sharing applications, and instant messaging. Much of this software is created by teams of skilled software developers. Analysts and architects tell these developers what features or changes go into the software based on market research, feedback from users, or pure guesses. Once the decisions are made, programmers create the source code that is later compiled into what many people see as the end result: the software itself.


    You may have heard of a concept called open source software. Open source software is software in which the source code is available to the users (FSFE). The de facto standard for calling something open source requires that users be able to add functionality that is not already available in the software and compile it. Depending on the license granted to them, users have the option, or are required, to make their changes available to other users. Open source software is transparent: any changes made are visible to a number of people, and there are no secrets, since the code can be viewed by anyone. Any concern that a piece of code is doing something wrong is reviewed by people interested in the software itself, whether hobbyists, users, or corporations who use the software to further their own needs.


    Closed-source software is just the opposite. End users may not view the source code. A user has no recourse to add functionality if he or she chooses. The user also places trust in the corporation that created the software. Microsoft Windows is an example of closed-source software.


    The benefits of open source software are obvious to anyone who has used Windows at their mother’s house after a virus was downloaded and installed. Since most open source software is also free, you benefit financially from not paying for software. Many of the popular software packages today are open source: Linux for operating systems, and most of the file sharing applications.


    Open source can take an idea from one corporation and improve upon it. America Online has an instant messaging network called AIM. For years, the only way to get on that network was to download a client from AOL, and it did not interoperate with other networks until an open source project named GAIM appeared. Anyone who used GAIM could communicate with AOL, Yahoo, and MSN users without having to install separate clients. It was also free (just as AIM was free).


    Why, then, does one of the most recognizable brands not embrace the idea of open source? For years, Microsoft avoided and rejected calls to open source Windows. Only recently did it unveil licenses that allowed users to review the source code that made up its software, but many advocates have shown these to be restrictive and not in the spirit of open source.


    It’s true that the company protects its intellectual property rights by keeping the software closed. Open source software can introduce a lot of legal issues. For example, if someone introduces code into the stream that is copyrighted or owned by another individual, legal issues arise. Open source software also tends to follow the “bazaar” model, more akin to design by committee than to one or two experts. If someone introduces a feature that is not widely liked, or implements it differently than the majority of users prefer, people will create another copy of the project (called “forking”) and continue development on their own software.


    But drawbacks aside, the clear beneficiary of open sourcing Windows is Microsoft itself. Competition from Linux, an open source OS, is fierce. Administrators prefer the idea that a community is working on issues and that code reviews can be performed. Many people have a bad taste in their mouths from Microsoft releasing patches that break the software instead of fixing it. If people were able to fix the issue themselves, the satisfaction of knowing the job is done to their standards takes hold, and Microsoft relieves itself of much of the blame it currently receives.


    Microsoft also gains ideas from the community of users who use its product every day. Of course it can continue to hire testers and designers internally and work on the software, but it cannot get feedback from the most important people (Raymond 2), the users themselves. Issues can be solved by the community, and the community can take its destiny into its own hands. Windows gains higher quality from having all these extra eyes looking at the software.


    One of the fallacies that plagues open source is that the software becomes free and therefore unprofitable. Microsoft can avoid this by open sourcing only certain core features of Windows. It may choose to open source its popular web server, IIS, or it can open source Windows itself. Doing so maintains the software’s current popularity while claiming the cachet that is typically exclusive to hotter, more revered open source software. Microsoft can also take many of the resources dedicated to bug fixing and move them to newer, more advanced, and more lucrative technologies. Those resources become a profit center rather than a cost center. These teams, armed with development money from Microsoft, can compete with teams from around the world to implement ideas originating in the open source world, improve upon them, or invent completely new technologies used by families, companies, and individuals.


    The final benefit is more abstract but important nonetheless. By opening the source code up to be viewed by others, Microsoft grabs a portion of the hacker spirit: the energy that drives us to improve and develop something new for no benefit other than the joy of developing it.


    Microsoft has made some progress in creating open source licenses for its software (OSI), but it’s time for its most critical software to be opened up to critical analysis.


    Works Cited


    (FSFE) Free Software Foundation Europe. Accessed 10 Dec 2007.

    (Raymond) Raymond, Eric S. “The Magic Cauldron,” version 3.0, 25 Aug 2000.

    (OSI) “OSI Approves Microsoft License Submissions.” Open Source Initiative. Accessed 10 Dec 2007.


    Writing a Firefox Extension for AIM

    I am learning the wonderful world of Mozilla, specifically writing an extension for Firefox. There are some really awesome things about this platform. Firefox is not just a great browser; it’s a really great platform. XPCOM lets me write JavaScript code to do sockets and all sorts of wild stuff. I haven’t even scratched the surface yet.

    In any case, I’m writing an extension to let me send links to people who are logged into America Online Instant Messenger. AOL just released a ton of developer tools for embedding AIM into your web pages, and a ton of SDKs, but I’ve decided to use the old TOC2 protocol for the project. The reason: I didn’t want to start trying to deploy XPCOM, C++ bindings, and all sorts of headaches with my first extension. I want this to be fun, easy, and useful for my first try. Something a little more advanced than Hello, World.

    In any case, I’ve spent the last two nights on it; I found an old TOC protocol implementation in JavaScript and created my first XUL add-on. Next steps are to upgrade from the TOC to the TOC2 protocol and figure out a way to test it properly without destroying my user account.