The Velocity 2008 Conference Experience – Part V
Welcome to the second (and final) day of the new Velocity Web performance and operations conference! I’m here to bring you the finest in big-hotel-ballroom-fueled info and drama from the day.
In the meantime, Peco had met our old friend Alistair Croll, once of Coradiant and now freelance, blogging on “Bitcurrent.” Oh, and at the vendor expo yesterday we saw something exciting – an open source control and deployment app from a company called ControlTier. We have one of those in-house, largely written by Peco, called “Monolith” – it’s more for control (self-healing) and app deploys, which is why we don’t use cfengine or puppet; those have very different use cases. His initial take is that ControlTier has all the features he’s implemented for Monolith plus all the ones on his list to implement, so we’re very intrigued.
We kick off with a video of base jumpers, just to get the adrenaline going. Then, a “quirkily humorous” video about Faceball.
Steve and Jesse kick us off again today and announce that the conference has more than 600 attendees, which is way above predictions! Sweet. And props to the program team: Artur Bergman (Wikia), Cal Henderson (Yahoo!), Jon Jenkins (Amazon), and Eric Schurman (Microsoft). Velocity 2009 is on! This makes us happy; we believe that this niche – web admin, web systems, web operations, whatever you call it – is getting quite large and needs/deserves some targeted attention.
First up is the open data talk from last night’s Ignite!; as the winner, she’s re-presenting it. Go back and read my previous post for more, since I’ve heard it already.
EUCALYPTUS, by Rich Wolski, is an acronym for “Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems.” There are a bunch of “cloud” solutions in the commercial space, from Amazon to 3tera to Mosso. But as this technology emerges, the question comes up of how it can be adapted to other uses – like scientific/academic, data assimilation, gaming, mobile… Some things work great with a grid as currently conceived, but some don’t.
Note to NI: the data assimilation arena is something we should be more involved in. It’s core to our business as being all about test and measurement. See the new issue of Wired for more.
So they looked at Nimbus, an interface to Globus, an old reliable grid thing. And Enomalism. But neither fit their needs. So they made Eucalyptus.
- Strict Web services
- Linux based, and using an interface designed to be compatible with Amazon EC2
- Low impact on installations, one-button install using Rocks (since SAs are rare in academia)
Oh, I get it now. It’s an open source tool to encourage research in elastic/cloud/utility computing. So essentially a cloud “development environment.” Experiment with it, test for free, then move to a cloud.
They also have requirements the current clouds don’t serve well – like accountability of user IDs. And more. So I’m as “excited” about cloud computing as the next guy, but I (and the people around me, it seems) have heard about enough of it for the week. He goes on at some length about the spiffy things Eucalyptus does that help out people in real environments (IP scarcity, EC2 compatibility, admin roles, etc.). And they use Rocks for a cloud-friendly, one-button install. Not sure what features it has that aren’t supported by puppet or whatnot – VM support? They use Mule, the open source ESB.
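To make the EC2-compatibility angle concrete, here’s a rough sketch of my own (not from the talk) of what it buys you: the standard boto EC2 client pointed at a Eucalyptus front end instead of Amazon. The endpoint, keys, and image ID are placeholders for whatever a local install would use; port 8773 and the /services/Eucalyptus path were the usual Eucalyptus defaults.

```python
# Hypothetical sketch: driving a Eucalyptus cloud through the stock
# Amazon EC2 client library (boto). Endpoint, keys, and image ID are
# placeholders; port 8773 and /services/Eucalyptus were the usual defaults.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="euca-head.example.edu")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,                  # many campus installs run plain HTTP
    region=region,
    port=8773,
    path="/services/Eucalyptus",
)

# From here on it's the same calls you'd make against Amazon proper.
for image in conn.get_all_images():
    print(image.id, image.location)

reservation = conn.run_instances(image_id="emi-12345678", instance_type="m1.small")
print("started", reservation.instances[0].id)
```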
Lessons learned: a fully open source cloud was harder than it should be. And admins still seem to do lots of things by hand rather than using Rocks, let alone a cloud-wide provisioning tool. They are planning to integrate with RightScale and add VMware support (it’s Xen-only now). So yay to your NSF dollars at work!
Next we’ve got Harald Prokop from Akamai. We’re interested because we’re doing a CDN evaluation at NI this year. “We didn’t really think of Akamai as a cloud, but since cloud is the new hot term, it’s a cloud!” That’s paraphrased from his talk, but not much.
The problem with traditional CDNs is that as more content is dynamic, less of it is cacheable. (Well, kinda.) So they talk about their new approach of “better routing” – since they have POPs everywhere, they can essentially do better-than-BGP routing and use WAN acceleration techniques (like not using the fat old bitch that is TCP) to speed up traffic across the Net. It’s called “Akamai SureRoute.” Seems like a good idea, really. And intelligent prefetching. He also talks about ESI (Edge Side Includes) – a good idea we’ve looked at, since Oracle WebCache supports it too – which lets you mark up parts of a page with differential cacheability rules. The thing that scares me about ESI is that no one else has ever picked up on this open standard; it’s been Oracle + Akamai for like 10 years, even though the spec was submitted to the W3C.
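To make the ESI idea concrete, here’s a toy sketch of my own (nothing to do with Akamai’s actual edge code): the page template is long-cacheable and carries ESI include tags, and only the marked fragments get fetched fresh and spliced in at the edge. The tag syntax is real ESI 1.0; the fragment fetching is faked with a dict.

```python
# Toy illustration of edge-side assembly, not Akamai's implementation.
# The long-cacheable template carries ESI tags; only the marked
# fragments get fetched fresh on each request.
import re

TEMPLATE = """<html><body>
<h1>ni.com home</h1>
<esi:include src="/fragments/nav"/>        <!-- could be cached for a day -->
<esi:include src="/fragments/cart"/>       <!-- never cached -->
</body></html>"""

# Stand-in for real origin fetches with per-fragment cache rules.
FRAGMENTS = {
    "/fragments/nav": "<ul><li>Products</li><li>Support</li></ul>",
    "/fragments/cart": "<p>3 items in your cart</p>",
}

def assemble(template):
    """Replace each <esi:include src="..."/> tag with its fragment."""
    return re.sub(
        r'<esi:include src="([^"]+)"/>',
        lambda m: FRAGMENTS[m.group(1)],
        template,
    )

print(assemble(TEMPLATE))
```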
How about moving business logic to the edge? Hmmm, sounds good but I wonder how realistic that is – you really have to factor your code carefully for that. But if you’re really into assembling from mini-components/services it could work.
Question: But we’re using Ajax and JavaScript – why do we need this? His answer is more diplomatic than mine, which is “you can’t implement much worthwhile in JavaScript.” For anyone who’s not a pure Web 2.0 play, that is. And Ajax is about communicating back with the server so then all this applies anyway, which means the question was more just keyword gibberish than well thought through. I mentally try to set the questioner ablaze but it doesn’t work.
Then there was a long break where we were supposed to go to the vendor floor, but instead we talked at length with Akamai and Coradiant. We’re looking into CDNs for ni.com and hence Akamai, and we’re Coradiant customers. We’ve had a kinda long and trial-filled path in getting our Coradiant device working; part of the problems were ours (hint: use taps, not SPAN ports) and part the product’s, but the bottom line is it’s not giving us good data yet, and that makes the guy signing the checks grumpy. But they seem committed to getting us more engaged with them, getting on the advisory board, etc. That’ll be good – I think the RUM space, and especially the network-sniff sort, is a very powerful tool in the APM suite, and we’re making a strategic shift to it from synthetic monitoring for more granular management of app performance with our developer teams.
The next segment is a whole set of demos of various nifty tools.
HttpWatch is demoed by Simon Perkins. It’s an HTTP sniffer and viewer. It shows you a waterfall for a request/response like YSlow or Page Detailer, but it’s pretty. The free version I just downloaded only shows the waterfall – you can’t drill down into a component – and it looks like the full version is $300. I wonder if it’s $300 better than the others, though. And it’s hard to tell up front how feature-crippled the free download is; I consider that bad form and prefer time-crippled. Now that there are like 20 of these tools, it’s time for a solid bakeoff.
Eric Lawrence demos Fiddler. He works on IE and developed TamperIE, Fiddler, etc. Fiddler’s more of a “platform for debugging” than a tool like HttpWatch, he says. It runs as a local proxy. They developed it to fix Microsoft’s Web site. And it simulates lower-bandwidth connections! And you can modify traffic. So it’s a very cool toolkit for more generalized work!
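For a feel of what “runs as a local proxy and lets you modify traffic” means in practice, here’s a rough Python analog – using mitmproxy as a stand-in, not Fiddler itself, and purely as a sketch – that fakes a slow link and tampers with responses on their way back to the browser.

```python
# A sketch of the local-proxy idea using mitmproxy, not Fiddler itself.
# Run with:  mitmdump -s slow_proxy.py
# then point the browser's HTTP proxy at localhost:8080.
import time
from mitmproxy import http

class SlowAndTamper:
    def response(self, flow: http.HTTPFlow) -> None:
        # Crude "lower bandwidth" simulation: delay every response.
        time.sleep(0.5)
        # Modify traffic on its way back to the browser.
        flow.response.headers["X-Proxied-By"] = "slow_proxy"

addons = [SlowAndTamper()]
```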
Eric Goldsmith of AOL shows AOL PageTest. It’s an open source IE plugin – or you can even use it online at www.webpagetest.org. Again, a waterfall. It also has optimization tips built in. One note – the browser version doesn’t have a legend anywhere for the colors, which is obnoxious (the web page version has a legend at the top).
Firebug in the hizzouse! John J. Barton shows it off. For the two of you who haven’t used it, it’s a Firefox plugin that lets you inspect/debug all kinds of stuff, especially JavaScript, CSS, and the DOM. Then you can profile the page. This’ll show you every JS call and how much time it took. Running this on the ni.com home page shows an unhealthy amount of junk from jQuery, I note. But it all got over my head quick. I’m no JavaScript programmer.
Last for the morning, Sean Quinlan from Google talks about Storage at Scale. Their philosophy is built around very low-end PCs – single-machine performance isn’t interesting, because partitioning problems is easy and even expensive hardware fails, so they build reliability into software, not hardware. They’ve built GFS, a non-POSIX clustered file system, and Bigtable, a distributed non-SQL database. They need ginormous scaling, of course.
Ernest thinks, “I’ve never understood why our server teams at NI insist on more expensive hardware – even after moving from Sun to Dell they insist on everything being dual power supply, RAID 5, etc. That’s why we have clusters. I want cheaper hardware!”
GFS spreads chunks of files out and duplicates chunks across servers. Fault tolerance is their largest concern – at scale, commodity hardware is failing a lot. Checksumming and replication work well against this. He gets into details way too complex to reproduce here; if you care, go look up GFS – there’s a paper on it.
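The gist, as I understood it, in toy form – a sketch of the shape of the idea, nothing like the real GFS: split files into big chunks, checksum each chunk, and hand every chunk to several chunkservers so any single box dying doesn’t lose data.

```python
# Toy sketch of the GFS idea: chunking, checksums, replication.
# The real system's master is far smarter about placement, racks,
# and re-replicating when a chunkserver dies.
import hashlib

CHUNK_SIZE = 64 * 1024 * 1024   # GFS famously used 64 MB chunks
REPLICAS = 3

def split_into_chunks(data, chunk_size):
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

def place_chunks(data, chunkservers, chunk_size=CHUNK_SIZE):
    """Build a 'master' table: chunk index -> (checksum, replica hosts)."""
    table = {}
    for i, chunk in enumerate(split_into_chunks(data, chunk_size)):
        checksum = hashlib.md5(chunk).hexdigest()
        hosts = [chunkservers[(i + r) % len(chunkservers)] for r in range(REPLICAS)]
        table[i] = (checksum, hosts)
    return table

# Tiny demo with a tiny "chunk size" so it runs instantly.
servers = ["cs01", "cs02", "cs03", "cs04", "cs05"]
print(place_chunks(b"pretend this is a huge web crawl file", servers, chunk_size=8))
```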
Bigtable was “database-like” and built on top of GFS. More of a massive hash really, key/value pairs. They needed a way to keep semi-structured data at scale.
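Same deal for Bigtable, in toy form – my sketch of the data model as the paper describes it, not Google’s code: every cell is addressed by (row key, column, timestamp), values are versioned, and rows stay sorted by key so you can scan ranges.

```python
# Toy sketch of the Bigtable data model: a sorted, versioned map of
# (row key, column, timestamp) -> value. Not Google's code.
import time
from collections import defaultdict

class ToyBigtable:
    def __init__(self):
        # row key -> column -> list of (timestamp, value), newest first
        self.rows = defaultdict(lambda: defaultdict(list))

    def put(self, row, column, value, ts=None):
        ts = time.time() if ts is None else ts
        self.rows[row][column].insert(0, (ts, value))

    def get(self, row, column):
        """Newest value for a cell, or None."""
        versions = self.rows[row][column]
        return versions[0][1] if versions else None

    def scan(self, start_row, end_row):
        """Yield (row, latest cells) for rows in [start_row, end_row), in key order."""
        for row in sorted(self.rows):
            if start_row <= row < end_row:
                yield row, {c: v[0][1] for c, v in self.rows[row].items()}

t = ToyBigtable()
t.put("com.ni.www/index.html", "contents:", "<html>...</html>")
t.put("com.ni.www/index.html", "anchor:velocityconf.com", "conference link")
print(t.get("com.ni.www/index.html", "contents:"))
print(list(t.scan("com.ni", "com.nj")))
```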
Lunch beckons and my butt is numb.