Mar 09

It can’t be assumed it will reach its intended recipient.

It’s not actually a new phenomenon, but it seems the deliverability of application-generated email has fallen to the point where a letter sent via the US Postal Service is more likely to reach its intended recipient. Let me explain.

Many services (including our own) use email as an integral part of the service itself. Account activation, critical system notifications, trial key issuance, software update alerts, billing-related communications: email is the transport mechanism we rely upon because it’s real-time and it’s the lowest common denominator for reaching a user. The recent preponderance of spam, however (and the consequent aggressiveness of spam filters), has rendered email unreliable for this purpose.

Person-generated emails still seem to make it through 99% of the time, but I’d guess the deliverability of our automated emails is maybe 85%. For account activation it’s merely an annoyance, but for proactive notifications of important events this is a real issue. Failing to receive those communications can have real material impact on the customer.

How are folks dealing with the unreliability of email in their apps? Are you staying within the realm of email and seeking better ways to ensure delivery? Exploring alternate communication mediums like SMS or IM’s? Offering personalized, protected RSS feeds of account activity? Or has someone developed a web service that can launch carrier pigeons?
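For what it’s worth, that 85% figure doesn’t have to stay a guess. Here’s a minimal sketch (entirely illustrative; the log format and function names are made up, not from any real system) of tallying delivery rates per recipient domain from a send log, which at least tells you which providers are eating your mail:

```python
from collections import defaultdict

def deliverability_by_domain(send_log):
    """Tally delivery outcomes per recipient domain.

    send_log is an iterable of (recipient, outcome) pairs, where outcome
    is "delivered" or "bounced". (Silent spam-foldering is only knowable
    if the recipient tells you, which is part of the problem.)
    """
    stats = defaultdict(lambda: {"sent": 0, "delivered": 0})
    for recipient, outcome in send_log:
        domain = recipient.rsplit("@", 1)[-1].lower()
        stats[domain]["sent"] += 1
        if outcome == "delivered":
            stats[domain]["delivered"] += 1
    return {d: s["delivered"] / s["sent"] for d, s in stats.items()}

log = [
    ("a@example.com", "delivered"),
    ("b@example.com", "bounced"),
    ("c@mail.test", "delivered"),
    ("d@mail.test", "delivered"),
]
rates = deliverability_by_domain(log)
```

A per-domain breakdown like this makes it obvious when one big provider starts filtering you, rather than discovering it from customer complaints.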

Jan 22

I saw this post from my friend Andrew Hyde on the homepage of Techmeme today, and judging by the number of reactions he got, his story struck a nerve. Long story short: in the course of using LBS apps like Brightkite and Foursquare to announce his location, he picked up a stalker who would coincidentally “bump into him” wherever he went. Creepy.

So the “people knowing where I am and stalking me” scenario is one potential negative implication of using these types of services. But there’s another to consider:

Not only do these services tell the world where you are, they also tell the world where you aren’t.

My friend Bill put it most eloquently the other day after I posted this tweet:

PHX -> SFO

This is a pretty standard convention when you’re going on a trip. He cleverly responded:

Bill -> Sean’s house -> Pawn Shop -> Casino

And immediately I realized he’s right.

Twitter is just one surface area, too. I also have my LinkedIn account integrated with my TripIt account so that it passively tells my contacts when and where I’m traveling. Presumably there’s no threat from people you’re connected to, but as these social networks gravitate towards being more and more public (as Facebook has demonstrated recently), innocent location announcements to trusted friends become inadvertent invitations to burglars with rudimentary googling skills. Add in a little smokescreen creativity by placing a hoax Craigslist ad and you have a repeatable formula for low-risk burglaries.

Something to think about.

Dec 31

Here are ten things I predict we’ll see in the IT/computing industry in 2010 (and yes, I’m biased about some of them given the world we live in at JumpBox):

  1. Self-healing applications become commonplace: We’ll see the rise of preventative and predictive technologies that fix problems in applications before they become fatal. Monitoring systems can already intelligently scale the computing resources allocated to an application by detecting when it’s hitting a resource wall. But beyond this capability we’ll see a new set of tools arise that automatically intercede and conduct repairs on the fly by reverting to a snapshot of the app and re-injecting data. This won’t be for financial and mission-critical apps, but it will happen for apps that need high availability with data that’s “good enough.” The net effect will be that the apps are perceived as being more stable when in reality the real hero is this adaptive repair technology behind the scenes.
  2. “Brick laying” in IT gets commoditized and the IT admin’s focus returns to architecture: By “brick laying” I mean the tedious, manual processes of maintaining and provisioning applications on the network. Virtual appliances deployed on private clouds will free admins from the menial chores of wedging the next PHP app onto an existing server and enable them to focus on proactive rather than reactive pursuits. Some admins will fear obsolescence and seek job security by keeping practices esoteric and arcane but the smart ones will realize their craft is merely shifting to the more interesting duty of architect with a focus on how to leverage things like virtualization and cloud computing to keep users happy.
  3. Balkanization of non-critical IT systems in the enterprise: We’ll see the proliferation of small, rogue collaborative applications in the enterprise. This will stem mainly from the frustration of being shackled by the company’s monolithic enterprise collaboration system. As self-serve deployment of collaborative apps becomes more feasible for non-technical folks the do-it-yourselfers will circumvent IT altogether and implement the apps that make their jobs easier. These transient, project-specific apps will blossom, serve their short-lived purpose and then vanish without ever involving IT. The more territorial admins will see this as chaos and try to retain control while the enlightened ones will realize that non-critical app governance is merely being pushed out to the edges where it belongs.
  4. Someone successfully addresses data interoperability amongst SaaS and local apps: As these siloed supporting applications sprout up both inside and outside the firewall, it becomes important to have a way to share and manipulate data amongst them. Technologies for deploying the apps will have made them trivial to deploy, but the connective tissue like REST and SOAP APIs will still be way too technical for the layperson to use. ETL (data Extraction, Transformation, Loading) products like Jitterbit, Talend and SnapLogic will put more control in the hands of the business user and empower them to do useful things with the data from these disparate apps. Laypeople will be able to snap together data streams like Lego blocks and make the things they need without involving a developer. The intuitiveness of the IDE for these Lego-building apps will be paramount, and a superior UI will emerge and become THE way it’s done (making one of those ETL companies a boatload of money). The other piece of the puzzle will be the presentation layer for consuming the data from these ETL apps. You’ll see more press releases like this one in which the presentation/collaboration product companies join forces with the ETL companies under the realization that peanut butter and chocolate just taste better together.
  5. Minority/majority shift between desktop apps and web apps: I don’t have the current figures on desktop vs. web application usage (and I’m too lazy to look them up) but we’ll see a majority of one’s work conducted via the browser. This trend has been in progress for some time, but 2010 is the year the perfect storm occurs: connectivity improves sufficiently that latency is negligible, web app interfaces match the usability of desktop apps, a critical mass of web-based alternatives emerges for all former desktop-only apps, and ubiquity of access becomes crucial as necessitated by remote workers and telecommuting requirements.
  6. Trials become the new black: The traditional ISV practice of promoting a white paper that then promotes the download of their software will be replaced by landing pages that offer immediate trials right in the browser. The advent of mechanisms for delivering a fast & convenient hands-on experience will remove friction from the sales process. There will no longer be that step where the vendor needs to convince prospective users to expend energy downloading & installing software for the purpose of investigation.
  7. Social networking fatigue sets in and blogging sees a resurgence: People will get burnt out on the barrage of micro-updates from services like Facebook and Twitter and divert their precious thought cycles to fewer sources that serve as “lenses” and provide more depth. Twitter and FB will continue to experience insane growth and conversations will still occur via those channels, but people will feel their mojo sapped and rediscover the blog.
  8. A major privacy breach casts doubt over enterprise use of SaaS for critical data: Cybercriminals will become more advanced and we’ll see a major breach of a high-profile SaaS provider like Salesforce. This will create a backlash that staunches the migration of IT operations to SaaS providers. The press will scream that the sky is falling, middle managers in IT will read the articles and regurgitate headlines to CIOs, who will look for alternatives that deliver the same convenience factor of SaaS whilst satisfying the need to run on-premise. And JumpBox will be there to deliver ;-)
  9. Open Source gains mainstream acceptance: The stereotype of crappy UIs and hard-to-use software will gradually be shed as apps like WordPress continue to deliver a kickass user experience and win a huge number of fans. Proprietary app vendors will cry, spread FUD and cling to a receding coastline only to see it inexorably washed away by OSS. There will still be a place for proprietary apps in niche situations, but one by one the OSS substitutes for things like CMSs and ERP systems will overpower their proprietary counterparts.
  10. An as-yet-undiscovered use of mobile phones becomes huge: In the mobile space companies will continue to build stuff nobody really wants (i.e. ways to get spammed with location-specific coupons as you walk by a Starbucks). Meanwhile, in a basement somewhere, a small team will conceive and develop a killer app for mobile that’s actually useful (either a consumer-facing app or a data mining app that’s sold to service providers). In the consumer space perhaps it’s a convenient 3-factor security mechanism that ensures your laptop can only be accessed when your Bluetooth phone is within a few feet? Or maybe a clever way to facilitate ad hoc carpools amongst participants? On the data analysis side it may be a way for the CDC to model the spread of an epidemic via cell phones, or a service for municipalities to do more intelligent traffic routing based on cell activity.

Do you agree or disagree with any of these? Do you have any predictions of your own you can share?
If you want more to ponder, ReadWriteWeb has some insightful predictions from its contributors. Here’s to computing awesomeness in 2010!

Oct 14

Here’s an interesting debate we had this morning in our office:

Would you consider this Twitter account SPAM?

Or the deeper question here: how do you define SPAM?

  • By a certain practice used to reach people?
  • By any unsolicited message with commercial-serving intent?
  • By a shotgun-style approach in communication?
  • By the relevancy of the message to the recipient?
It can’t be left to a completely relativistic definition because it becomes impossible to make laws to protect against it (i.e. the one guy who happened to be wanting to buy Viagra this morning finds the SPAM email to be very timely and useful, but that doesn’t justify the annoyance for the rest of us). On the other side of the continuum, it can’t be boiled down to specific practices because that’s what Bruce Schneier would call “the futility of defending the targets.” Here’s my position on the matter:

I monitor key phrases on Twitter, certain sequences of words that indicate a user has a problem that one of our free JumpBoxes could solve. I skim hundreds of these tweets, select the few people we can help, and respond to them individually, introducing them to our product. I documented this technique a while back. I’d say all but two of the 68 responses I’ve gotten from reaching out to people this way have been received with appreciation. Two people have responded calling foul.
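The monitoring step is simple enough to sketch. Assuming you already have a stream of tweet text from search results (the key phrases and function name here are hypothetical, purely for illustration), the matching boils down to a normalized substring check:

```python
import re

# Hypothetical key phrases suggesting a problem a free JumpBox could solve.
KEY_PHRASES = [
    "need a wiki",
    "looking for a bug tracker",
    "how do i install wordpress",
]

def matches(tweet_text, phrases=KEY_PHRASES):
    """Return the key phrases found in a tweet, ignoring case and
    collapsing runs of whitespace so minor formatting doesn't hide a hit."""
    normalized = re.sub(r"\s+", " ", tweet_text.lower())
    return [p for p in phrases if p in normalized]

hits = matches("Anyone know a good wiki? I need a   wiki for my team")
```

The human part (skimming the hits and deciding which few actually warrant a personal reply) is what keeps this from degenerating into the shotgun approach described above.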

According to the Twitter TOS, the account above clearly violates the “If your updates consist mainly of links, and not personal updates” rule. But that could be satisfied by peppering it with personal updates and fluff. The reason I don’t do this from my personal account or our JumpBox account is that doing so would inundate their followers with a bunch of repetitive info that’s uninteresting to them. But I digress. The point is there are ways to satisfy the TOS requirements, but that just feels shady. I can see someone arguing that this technique isn’t in the “personal updates” spirit of what Twitter intended. I get that.

But here’s what I don’t understand:

  • Making a freeware product recommendation for someone else’s product on a mailing list in response to a need a participant expresses: completely 100% kosher and expected.
  • Making a freeware product recommendation for your own product on a mailing list when appropriate: cheesy maybe, but still completely appropriate.
  • Making a freeware product recommendation for your own product in a distributed micro-blogging environment like Twitter, where you single out a recipient who expresses a need your free product solves and direct a thoughtful reply to that person: sorry, but I see that as a legitimate way of reaching out to people. It’s not like you’re cluttering their inbox; it’s a message that appears on their @replies page in Twitter.

If you were trying to sell them something, okay, I agree. If you were repeatedly harassing the same person, gotcha. But a one-time message that makes someone aware of a solution that’s free and so unique they would never know to search for it in the first place? I don’t see the SPAMiness in that. Anyways, I’m probably going to discontinue this practice, not because I think it’s spammy but because the return isn’t there time-wise.

What do you think about this practice, and the bigger question: how do we define what constitutes SPAM in the evolving world of social media?

Jul 27

HNSort.com is an app I threw together this weekend that allows readers to sort the stories on Hacker News by various criteria (rank, points, comments, title, domain, submitter, and age).

This mini project spawned from two frustrations: 1) my dissatisfaction with the interface for reading the site, and 2) a desire to have an atomic project that I could complete and be done with in a weekend.

I check HN periodically throughout the day in between tasks. But rather than reading every headline, I skim the site to find the posts that are most important (as indicated by a high number of comments and points). Unfortunately there’s no easy way to find those gem posts; you end up having to sift through each one. So in the spirit of the site itself (i.e. hacking stuff to make it work the way you want), I wrote a different interface for it. For anyone interested in the details, I’ll explain below how the app works and the backstory on how I made it.


The backstory

The main goal was a convenient way to quickly find the gems on Hacker News without having to manually skim through each story. Ideally I wanted something that would work both on my computer and on my iPhone. And as a bonus I thought it would be neat to expose it so others could use it, and in so doing get us some cheap, targeted advertising for JumpBox with an audience that would appreciate it. I knew, given the nature of the app, that it would probably do well on HN itself.

I looked briefly into what it would take to write a Greasemonkey Firefox extension, but my JavaScript skills are wretched, and even if I were able to make that work it wouldn’t help for reading on the iPhone, nor would there be any promo benefit to JumpBox. So I concluded it would need to be a mashup accessible via the web.

There is no public API for HN, so the first step was to create one using Dapper. This was the easiest part of the whole project. Their wizard makes it ridiculously easy to turn any webpage into a feed of XML, JSON, RSS, whatever you need. It took all of five minutes to make this dapp to produce a real-time XML feed of stories off their homepage. So far so good.

The next thing I tried was to head over to MindTouch, fire up a free express account and use their MindTouch Core product to render the results in a sortable table. Again it took all of five minutes to produce this result, which was promising but lacked the sorting capability. Unfortunately, adding the sorting feature would prove to be significantly more difficult. After a few hours of tinkering with DekiScript (their proprietary scripting language) I eventually gave up – I’m sure there is a way to iterate over an XML result set using DekiScript, but I certainly couldn’t figure it out even with a ton of good documentation.

At this point I made one last-gasp attempt to solve this with a free pre-made tool: I knew Google Spreadsheets had the ability to import XML and JSON feeds. And a Google Spreadsheet can be sorted six ways from Sunday, so all good there. Hopeful about this avenue, I tinkered for about an hour trying to get the import to work per the Google documentation but sadly had to give up. Apparently Google just didn’t like the XML feed. Sigh.

Having run out of options, I decided to dust off the ColdFusion skills and code this thing from scratch. What would have been ideal at this point was a JumpBox for Railo or BlueDragon. Instead I futzed around trying to find an online sandbox where I could develop without having to install anything on my Mac. I opened an account here, but sadly the CFHTTP tag I needed was malfunctioning on their system. I then opened a $5/mo hosting account with Hostek only to learn that they disable the CFDUMP tag, which is key when developing with nested structures and result sets. I ended up installing the standalone server from Adobe on my Mac and making the site there.

After a few hours of tinkering I had it consuming and displaying the results in a table. There was another hour of scrubbing and transforming the data so all the numbers were sortable. The last step was to add in the Tablesorter jQuery plugin. The final result was exactly what I wanted: a simple HTML spreadsheet of all the articles on the homepage of HN. For you coders, here’s the single page of code that handles everything.
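For the curious, the consume-and-sort step is easy to reproduce in any language. Here’s a rough Python sketch of the same idea (note: the feed structure below is invented for illustration; the real Dapper XML used different element names, and the actual site did the sorting client-side via Tablesorter):

```python
import xml.etree.ElementTree as ET

# A stand-in for the Dapper feed; the real element names were different.
FEED = """
<stories>
  <story><title>Post A</title><points>12</points><comments>3</comments></story>
  <story><title>Post B</title><points>98</points><comments>41</comments></story>
  <story><title>Post C</title><points>55</points><comments>7</comments></story>
</stories>
"""

def parse_stories(xml_text):
    """Parse the feed into dicts, converting numeric fields so they
    sort as numbers rather than strings."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": s.findtext("title"),
            "points": int(s.findtext("points")),
            "comments": int(s.findtext("comments")),
        }
        for s in root.findall("story")
    ]

def top_by(stories, key):
    """Sort descending by the given numeric column, like clicking a
    column header in the table."""
    return sorted(stories, key=lambda s: s[key], reverse=True)

stories = top_by(parse_stories(FEED), "points")
```

The scrubbing step mentioned above corresponds to the `int()` conversions here: until the numbers are actual numbers, any sort puts "98" before "12".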

Granted, this ended up occupying most of my weekend, but it was a great exercise in learning about a bunch of different technologies. I submitted the page to Hacker News and it rose to #3 on the homepage last night with significant momentum. Sadly, when I woke up this morning my provider had experienced a DNS outage, rendering the site unreachable since last night and cutting it down in its prime. You only get one shot at the homepage of HN, so I have no idea how people will find it now :-(

But all in all it was a good learning experience with an output that I can (and will) use from now on for reading that site. At $5/mo it’s worth it to me for my personal use alone. And the good news is that it even works on the iPhone. If you’re a fan of HN, try it for reading the site and tell me what you think.

Jul 24

TempeNerds got its 300th member today. This is a monthly lunch gathering I organize to bring together techies from Phoenix Metro. The thinking is that the better we know each other’s talents and businesses, the more we can make appropriate referrals. The group has been growing steadily since its inception a year ago and saw a significant influx of new members with the last lunch we did at Terralever.

Groups like TempeNerds, Geek ‘N Eat, Gangplank activities and Reopen Phoenix are badly needed in metro areas like Phoenix that suffer from massive urban sprawl and fragmented communities. If you’re here and know a fellow techie who hasn’t been to one of these group events, follow the action on Eventification and bring that person out to the next one. Help the nerds prevail.

We. Are. SpartaaAAAAAA!

Any other worthy local tech groups I failed to mention?
