Jun 08

We had the chance Tuesday night to demo the Trac JumpBox we just released to a small crowd at Refresh Phoenix. It was a decent showing with about forty people and six demos from local AZ companies that were all doing interesting things. The Trac JumpBox was well-received and a few developers who had set it up previously remarked that it would indeed save them days of tedious work. Erica Lucci did a fantastic summary of the demos and a few others have posted their thoughts. Thanks to Acme Photography for permission to republish the photo below. AZ is still relatively fragmented with pockets of smart people creating innovations in isolation. It’s great to see groups like Refresh bringing together the people that are working on interesting projects. The next demo night is scheduled for October. If you’re in the Phoenix Metro area and have something neat to show off, get in touch with Erica.

[Photo: the Trac JumpBox demo at Refresh Phoenix]

Jun 05

We just released the Trac/SVN Development JumpBox. This is a virtual appliance that integrates the popular SVN source control system with Trac for documentation and issue tracking. We’re excited about this JumpBox because it’s the first one that takes a massively complicated install (a process that can take an experienced developer two days) and reduces it to a 30-second task.

  • If you don’t know what source control is or why you should use it, read this.
  • If you know what source control is but don’t know why you should use Subversion, read this.
  • If you use Subversion currently but don’t see why you should integrate Trac, read this.
  • And if you already use Trac/SVN and don’t understand why you should use JumpBox to skip setup, read this and then read this.

As with all JumpBoxes, you can use it perpetually for free if you don’t mind having our navigation and don’t need access to the premium features. During the remainder of the release candidate phase (approx 3wks) you can still unlock the premium features and get shell access and automated backups without paying.

With today’s other releases of Drupal and DokuWiki, this brings the total to nine open source applications currently available. If you’re in Phoenix, we’ll be at Refresh Phoenix tonight demoing the Trac JumpBox. We’ll be shipping production releases of all applications July 1st. Get ’em while you’re still able to unlock them for free!

    Apr 14

What can we possibly learn from butterflies and snails? The science of biomimicry posits that we can observe naturally-occurring structures and processes formed through evolution, distill the mechanisms at work, and apply the lessons toward solving modern-day problems. There’s an excellent podcast here with Janine Benyus, a woman who has dedicated her life to extracting insights from things like the cell structures of butterfly wings and the calcification mechanism in snail shells and applying them toward the development of stealth aircraft and efficient plumbing systems. It stands to reason that with four billion years of natural selection, evolution already has the answers to some tricky problems.

There are two fields in the space of artificial intelligence that take different approaches to solving problems but incorporate this same concept: that we can emulate nature with software to solve complex problems. One attempts to emulate evolution (genetic algorithms), while the other, an emerging field, attempts to emulate our brain function (Hierarchical Temporal Memory systems).

I spent the entire day yesterday doing nothing productive for our company, instead reading up on and listening to podcasts about these two fields of study, and I’m blown away by the possibilities of each. I can’t possibly do justice to an explanation of either, but here’s what I found interesting:

Genetic Algorithms – synonymous with the term evolutionary machine learning, this field has been around a while. This amazing podcast with David Fogel explains his work with genetic algorithms in building a checkers program that learned to play at a grandmaster level with zero instruction on what the goal of the game was or how to win. It’s one thing to build a program and give it explicit instructions for how to evaluate moves and choose the best one – it’s an entirely different prospect to leave it untrained, turn it loose, and tell it only the number of points it earned after x number of games. They essentially conducted Darwinian evolution among different virtual checkers players using a program that ran for six months on a Pentium II Windows NT machine; it learned to play on its own and ended up beating a grandmaster.

Granted, the game of checkers is well-defined with clear rules and played in a bounded space (i.e. there are no random external forces to account for, like pieces getting knocked off the board by accident), but the mind-blowing implication of this story is that we no longer have to explicitly program a computer with the methods we know for solving problems – we can simply give it the problem and tell it when it’s doing well.
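To make the mechanics concrete, here’s a toy sketch of that evolutionary loop in Python. To be clear, this is not Fogel’s checkers program – the players, the stand-in “game,” and all the numbers are invented for illustration – but it shows the core cycle of evaluate, select, and mutate, with fitness coming only from points scored:

    # Toy genetic algorithm: evolve "players" (weight vectors) using only
    # tournament points as feedback -- no instruction on how to play.
    import random

    POP_SIZE, GENERATIONS, MUTATION_STDEV = 20, 50, 0.1

    def random_player():
        return [random.uniform(-1, 1) for _ in range(10)]

    def simulate_game(a, b):
        # Placeholder for an actual game between two players; here we just
        # compare the weight sums and award +1 or -1 points.
        return 1 if sum(a) > sum(b) else -1

    def tournament_points(player, population):
        # Stand-in for "points earned after x games"
        return sum(simulate_game(player, rival) for rival in population)

    def mutate(player):
        return [w + random.gauss(0, MUTATION_STDEV) for w in player]

    population = [random_player() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        ranked = sorted(population,
                        key=lambda p: tournament_points(p, population),
                        reverse=True)
        survivors = ranked[:POP_SIZE // 2]                       # selection
        population = survivors + [mutate(p) for p in survivors]  # offspring

In Fogel’s actual work the players were neural networks evaluating checkers boards and the games were real, but the loop itself – score, select, mutate, repeat – is the whole trick.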

Hierarchical Temporal Memory Systems (HTMs) – this emerging field was started by Jeff Hawkins, the guy who created the Palm Pilot and wrote the book On Intelligence. It takes the approach of emulating the structure of the human brain (specifically the neocortex) with the idea that “our brains do things like facial recognition extremely well while the most powerful traditional computers cannot, so there must be something to learn.” It breaks learning down into two ideas: that learning is based on patterns perceived in close temporal contiguity (back to back), and that the storage mechanism consists of hierarchical nodes storing different aspects of complex information at varying levels. For instance, a computer running the Numenta software that’s hooked up to a camera and shown 100 instances of various dogs will begin to learn the characteristics that make a dog (fur, four legs, tail, typical shape, etc). Information about the fur and the angles of the body is stored at the lowest nodes, while concepts of body parts and the holistic notion of what makes a dog a dog live at the higher nodes. I haven’t gotten my head entirely around this concept but there’s an awesome six-page article here by Hawkins that explains this stuff, and if you’re brave enough, there’s a longer white paper on his company site here that digs even deeper.
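To illustrate just the “hierarchy” half of that idea (and none of Numenta’s actual algorithm or the temporal half), here’s a deliberately tiny Python sketch: two low-level nodes memorize small patches of input, and one higher node memorizes which combinations of those patches occur together:

    # A drastically simplified hierarchy sketch -- not Numenta's algorithm.
    from collections import Counter

    class Node:
        def __init__(self):
            self.patterns = Counter()   # patterns this node has seen

        def learn(self, pattern):
            self.patterns[pattern] += 1

        def name_of(self, pattern):
            # Identify a pattern by its index among those seen so far
            return list(self.patterns).index(pattern)

    # Two low-level nodes each watch half of a 4-pixel "image";
    # a high-level node watches the pair of names they report upward.
    left, right, top = Node(), Node(), Node()

    for image in [(0, 1, 1, 0), (0, 1, 1, 0), (1, 1, 0, 0)]:
        l_patch, r_patch = image[:2], image[2:]
        left.learn(l_patch)
        right.learn(r_patch)
        top.learn((left.name_of(l_patch), right.name_of(r_patch)))

    print(top.patterns)   # the top node's crude "concepts"

The low nodes know only raw patches (the “fur and angles” level); the top node knows only combinations of patch names (the “what makes a dog a dog” level).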

They have an interesting business proposition: they’ve released the HTM software and APIs free for people to use as the engine powering apps like facial recognition and intelligent network routing programs. They’re basically selling the raw “neocortex computing fabric” that anyone can train for any application. Combine this with the idea of utility computing and running your app on a virtual infrastructure like Amazon EC2 and S3, and a high school kid has the ability to build a supercomputer running any world-changing application he or she can dream up.

In my high school physics class we built solar cars out of balsa wood and competed to see who could build the fastest one. It was fun, but in reality the skills we learned from those experiments had limited applicability. The forward-thinking high school physics teachers of today should be ditching the balsa wood, training their students on the concepts of genetic algorithms and HTMs, and using them to conduct the equivalent of the solar car challenge, only with real problems. These concepts, if grokked by youngsters now, can be applied to solving hairy problems that will undoubtedly confront us in the next generation. Pollution, environmental change, contagious disease epidemics, harmful drug interactions, security threats, energy and vehicular traffic routing, diminishing energy resources – for all the daunting problems that could wipe out civilization, there are extremely promising problem-solving tactics emerging. Whether for-profit or non-profit, the important companies of tomorrow will be the ones that learn to capitalize on these technologies. Let’s hope that the high school physics teachers out there are listening and realize how important they are. My mother is three weeks away from finishing a forty-year high school teaching career – for the pay and the bureaucracy that public education teachers put up with, I have to imagine the ones who stick it out are the ones who realize the value and ripple effects of what they do. And let’s hope that our Federal government, too, will recognize the importance of high schoolers becoming excited about this stuff now and start to prioritize expenditures accordingly.

    Mar 16

Server virtualization is sweeping the IT industry – that’s no secret. But what does it mean to the average person who’s not an IT admin? We deal heavily with various virtualization technologies every day, since virtualization is a key enabler underlying the JumpBox platform. I wanted to take a stab at breaking things down in terms of benefits rather than features, so the people without pocket protectors can understand some of the implications of this stuff.

First off, my partner writes a blog on virtualization – if you’re tech-savvy you’ll find the synopsis here remedial, so head on over to VirtualizationDaily.com for a more in-depth discussion aimed at IT admins and CIOs/CTOs. This is the big-picture overview that attempts to explain the benefits to a non-technical user.

    What is virtualization?

    The useful definition in this context is: the ability to run an entire instance of a computer in software.

    Judging from the surveys people complete when they download a JumpBox from our site, most people are currently using virtualization for testing and evaluation purposes. Usually when you think of installing an application like Quickbooks or Office on your desktop, you get an installer that’s specific to your operating system and go through a wizard that sets up the application and its dependencies into a directory on your computer. It runs in the context of your operating system and generally has access to do potentially destructive things to your computer like writing to the filesystem (or making a bunch of registry entries if you’re on a PC). When you run a virtual machine it’s different.

Virtual Machines (or VM’s) are complete instances of a computer with their own complete operating system. It’s a little Malkovich-Malkovich to think about running one computer inside another, but essentially that’s what you’re doing when you use virtualization. A quick distinction needs to be made here: emulation does not equal virtualization. People who got a bad taste from using programs like Virtual PC and then later switched to VMware will testify to the performance improvements of a virtualized environment over an emulated one. The goal of abstracting an OS away from its underlying hardware is the same but the means of doing so is different – if you want to read more about the difference, go nuts. Back to VM’s though…

    So that’s nice that this capability exists to run one computer inside another, but why on earth would someone want to do it?

    Implications of virtualization

    For server applications:

1. Speed of setup – there is no more install process when using preconfigured VM’s for testing. You can download a virtual computer configured with an app, turn it on, and have it working immediately without any setup. Setup processes that used to take anywhere from one hour to one day are reduced to the time it takes to download a VM.
2. Efficiency – the average server runs way under capacity; let’s say, for the sake of argument, 8% CPU utilization. By virtualizing the server and running multiple VM’s on the same physical machine you squeeze more efficiency out of your existing hardware. That means less space required in your datacenter, less power usage, and fewer servers to update and service (see the back-of-the-envelope sketch after this list). This is the reason that California’s largest power company announced a 50% rebate to any ISPs that virtualize their infrastructure.
3. Containment – let’s say you’re evaluating five open source applications to see which most closely meets your needs. Traditionally you’d have to install them locally in your computer’s OS, risk hosing something, and hope the four you don’t keep have good uninstallers so you don’t end up with a bunch of clutter on your system. VM’s run completely self-contained, so you can try out an app, throw it away, and know that your system is still pristine.
    4. Known setup state – there’s no opportunity to screw up the setup since that portion is removed. You’re using a freeze-dried application that was presumably configured correctly the first time and then turned into a VM.
5. Portability, hence low entry cost – most people don’t want to purchase an expensive server up front on which to run an application. They’d rather serve it from a crappy box until they know it’s popular enough to merit purchasing more expensive hardware. Under the traditional “bare-metal” install, you evaluate the app on a test machine and then install the production version on the production machine, and any data you entered in the test version needs to be migrated into the production version or else it’s lost. Because VM’s are agnostic of the underlying hardware, however, you can start off serving an application from your laptop and then pick it up at any time and move it to a fast server with no painful migration process or worry about driver or dependency conflicts.
6. Support – have you ever hosed a system so badly that you had to send it to someone for repair? With a virtual computer you can put the entire machine on a DVD or up on an FTP server, have someone fix it, and get it back without ever shipping the hardware.
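
As promised in point 2, here’s the back-of-the-envelope consolidation math in Python. The server count and target utilization are hypothetical; only the 8% figure comes from the example above:

    # Rough consolidation math: how many virtualized hosts replace a room
    # full of underutilized physical servers?
    physical_servers = 40       # hypothetical fleet size
    avg_utilization = 0.08      # 8% CPU per server, per the example above
    target_utilization = 0.70   # a conservative ceiling for a virtualized host

    vms_per_host = int(target_utilization / avg_utilization)   # 8 VMs per host
    hosts_needed = -(-physical_servers // vms_per_host)        # ceiling division
    print(f"{physical_servers} servers -> {hosts_needed} virtualized hosts")

Forty boxes collapse to five, with the corresponding savings in rack space, power, and servers to patch.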

    For the desktop user (all the above plus…):

    1. Choice – you have the option of using a completely different OS as your desktop environment yet being able to run applications that only work in other OS’s. You could use Mac OS X or Ubuntu Linux and run WinXP virtually for those applications for which there are no acceptable substitutes in the alternative OS. Or continue to work in Windows and have access to run Linux apps.
    2. Productivity – when your virtual XP instance is crashing, you can still be productive working in your base desktop environment ;-)
    3. Protection – Using VM’s you can do the latest upgrades to applications without fear of nuking your OS because if everything breaks, you can just roll the VM back to an earlier version.

There are probably many others but these are the salient ones that come to mind. If you haven’t already tried running a virtual machine, it takes five minutes to do and will change how you think about a computer. For a Mac, get either the Parallels trial or the VMware Fusion beta. On a PC, download the free VMware Player. Then either visit VMware’s directory of free VM’s or (shameless plug) download one of the JumpBoxes from our site. We currently have a blog, a wiki, a CRM, and a discussion-forum JumpBox available. The way in which we’ve packaged our VM’s is specifically known as a “virtual appliance” since you never need to see the underlying OS or how the application works – everything is configurable via a web interface.

All kinds of interesting possibilities exist around new types of hosting opportunities and support services for applications packaged this way. As you might guess, this is why we are so excited about JumpBox. Have fun with it, and if you are heavily into virtualization, join the ongoing dialogue over on Virtualization Daily.

    Feb 22

If there’s one concept you can learn this month with the single greatest potential to improve the profitability of your site, it’s DIY multivariate analysis using Google Web Optimizer (GWO). Let me explain.

What is multivariate analysis and why should you care?

    Multivariate analysis in the context of web sites is the science of changing elements on a page and studying the effect they have on your visitors’ behavior. If you have a web site, presumably you already have a goal and your site facilitates a behavior from visitors that contributes towards achieving that goal. There is likely a desired outcome you’re seeking on each visit – an action you want that person on the other side of the wire to take such as filling out a contact form or purchasing a product. This desired outcome is known as a conversion.

Improving your conversion ratio even one percent can lead to massive improvements in sales and profitability. This calculator is a simple way to run some what-if scenarios given your current order size, traffic, and sales numbers. The easiest way to understand the benefit of improved conversion is to think of it as “miles per gallon” for a vehicle – think how much gas you would save if you doubled the fuel efficiency of your engine. But it’s even better with web traffic: if your current cost per acquisition is $5 per customer given all your fixed design/development/hosting costs and marginal costs like advertising, converting twice as many visitors with zero additional cost cuts your CPA in half, to $2.50. This has a dramatic effect on the profitability of your operation, and the effect compounds if you feed the savings back into targeted promotion.
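Here’s that arithmetic as a quick Python sketch. The spend and traffic numbers are hypothetical:

    # Cost-per-acquisition what-if: spend is held constant while the
    # conversion rate changes.
    def cost_per_acquisition(total_spend, visitors, conversion_rate):
        customers = visitors * conversion_rate
        return total_spend / customers

    spend, visitors = 5000.0, 10000   # hypothetical monthly numbers
    print(cost_per_acquisition(spend, visitors, 0.10))  # 10% converts -> $5.00
    print(cost_per_acquisition(spend, visitors, 0.20))  # 20% converts -> $2.50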

    How GWO works

So now that you understand the value of improving conversion, let’s talk about how GWO specifically does it. Google Web Optimizer is a javascript-based multivariate analysis tool that gives you the ability to test different versions of key pages on your site to determine the winning formula that produces the highest conversion. You set up experiments and GWO dynamically serves different flavors of the same page at random to different visitors and records the number of resulting conversions. The empirical data is then presented in a graph like the one below. Provided you have enough traffic to produce significant results, the tool reveals the winning combination along with the confidence level of the suggestion (i.e. the statistical significance).

[Image: GWO experiment results graph]

    You can see that Graphic #4 outperformed the others and crushed the original graphic by almost double.
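
If you’re curious what the tool is doing conceptually, here’s a simulated sketch of the experiment logic in Python – random variant assignment plus per-variant conversion tallies. This is not GWO’s implementation (that lives in javascript snippets on your pages); the “true” rates plugged in are the ones we measured in the experiment below:

    # Simulate a GWO-style experiment: assign each visitor a random page
    # variant and tally conversions per variant.
    import random
    from collections import defaultdict

    true_rates = {"original": 0.113, "combo1": 0.187, "combo2": 0.161,
                  "combo3": 0.116, "combo4": 0.212}

    shown, converted = defaultdict(int), defaultdict(int)
    for visitor in range(50000):
        variant = random.choice(list(true_rates))   # sticky per-visitor in reality
        shown[variant] += 1
        if random.random() < true_rates[variant]:   # did this visitor convert?
            converted[variant] += 1

    for variant in true_rates:
        print(f"{variant}: {converted[variant] / shown[variant]:.1%}")

With enough visitors the measured rates converge on the true ones, which is why traffic volume determines how quickly you get significant results.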

    And the winner is…

So this is all nice in theory, but let’s take a look at a concrete example of how this helped us refine our messaging on the JumpBox site. A week ago I set up GWO on the JumpBox homepage and tested five different versions of the main graphic. Here are the five versions I tested:

[Images: the five homepage graphic variations tested]

    Can you guess which one performed the best?

    (scroll down for the answer)

(keep going…)

(wait for it…)

(waiiiit for it….)

[Image: the winning graphic]

This version converted at a rate of 21.2% – almost double the original, which converted at 11.3%. The breakdown for all five is as follows:

Original – 11.3%
Combo 1 – 18.7%
Combo 2 – 16.1%
Combo 3 – 11.6%
Combo 4 – 21.2%

Personally I thought gradients went out of fashion in ’96, but that just proves conversion is not necessarily about spectacular design. The designers who create spiffy pages and defend their effectiveness from a design standpoint may be completely missing the boat in terms of how well the design converts traffic. Nobody can argue with real numbers from your own visitors – at that point the winning choice is no longer speculation; there is a right answer, confirmed by empirical data.
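
For the statistically inclined, a two-proportion z-test is one standard way to check that a gap like 21.2% vs. 11.3% isn’t noise. The per-variant visitor counts below are hypothetical (GWO reports its own confidence level for you), but the mechanics look like this:

    # Two-proportion z-test: is variant A's conversion rate really higher
    # than variant B's, or could the gap be chance?
    from math import sqrt, erf

    def z_test(conversions_a, visitors_a, conversions_b, visitors_b):
        p_a = conversions_a / visitors_a
        p_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        z = (p_a - p_b) / sqrt(pooled * (1 - pooled)
                               * (1 / visitors_a + 1 / visitors_b))
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
        return z, p_value

    # Hypothetical: 1,000 visitors per variant at the observed rates
    z, p = z_test(212, 1000, 113, 1000)
    print(f"z = {z:.1f}, p = {p:.6f}")   # p far below 0.05 -> a real difference

At those sample sizes the difference is overwhelmingly significant, which matches the tool’s verdict.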

Anyway, it should be noted that for the purposes of this experiment I counted a conversion as a click-through to the about page to read more about the JumpBox technology. Down the road it will make more sense to count a download of our free trial as the conversion, but I did it this way for now to get more data immediately on the effectiveness of the homepage graphic at moving people to that next page. I didn’t utilize the multivariate capabilities of GWO either – I used it more as an A/B split test (actually, A/B/C/D/E). GWO can juggle permutations of headlines, graphics, text, calls to action, and any other displayable element on the page and intelligently report the winning assembly of items. One other aside: we used StumbleUpon advertising as a fire hose of semi-qualified traffic that we could turn on at will to accelerate testing. This worked very well.

What to do if you’re interested in using GWO

The video tutorial from Google nails the setup process so I won’t rehash the steps for implementing it. GWO is in private beta at the moment, but it seems they’ve been letting in groups more frequently lately, so sign up here and wait for your number to be called. You’ll know you’re in when you see this additional tab appear in your AdWords account:

[Image: the new GWO tab in the AdWords account]

My only complaints about the tool so far are related to usability – it’s not quite there yet for non-technical users. You need access to your web pages to paste in javascript code, as well as the technical ability to do so. I would love to see GWO use a single block of js that you install once, allowing you to run experiments serially without having to strip out the old js and paste in the new. Google’s best move going forward will be to author plugins for the popular CMS platforms and ecommerce engines to simplify adoption and save people from ever having to futz with javascript at all. Other than those gripes, this is an amazing tool for anyone with a web site. No more speculative arguments about “this design is way better than that one” – now there’s a definitive answer to which designs and promotions work best.
    Have fun with it.

    Feb 04

    This is a genius feature they just added to Gmail:

[Image: Gmail’s option to open a Word attachment as a Google Doc]

Sending Word docs back and forth over email sucks – it’s messy, you never know who has the most recent version of the document, and the potential for overwriting and losing changes compounds as more people get involved. Google Docs is a great free service we used with our PR agency to collaborate on press releases. It allowed four of us to revise the verbiage without any question of who had the latest copy. And it tracks revisions, so you can still get back to an earlier state of the document and see what changed between revisions.

This new feature in Gmail seems to actively scan your attachments to determine whether one of them is a Word doc; if it is, you get a link to open it as a Google Doc rather than having to download it. This feature alone is reason enough to get a Gmail account if you don’t already have one, and it should be helpful for companies that have to deal with inbound Word docs.

    UPDATE: apparently this isn’t rolled out across all gmail accounts. Not sure why I’m seeing it- leave a comment if you have it in yours.
