Web Admin Blog

Real Web Admins. Real World Experience.

Why is the Vignette Content Manager GUI Stuck in the 90’s?

Dear God, not another Vignette post?!?! What can I say? It’s all I’ve done for the past two-and-a-half days. For anyone who has used VCM, you know what I’m talking about. It’s a fairly powerful tool for content management, but it’s slow to the point of being almost unusable and the GUI design (web interface) is like something out of the late 90’s. While I’ve had plenty of on-the-job training with VCM, I never really had the opportunity to ask questions of an “expert” like I have during this class, so I started asking questions about alternate ways to do things. For example, a lot of the work that we do with VCM is done during a go-live in the wee hours of the morning. It would be really nice if you could do some sort of scripted input instead of point-click-wait over and over again. So I asked the instructor where the GUI configuration stuff for VCM is stored. It turns out that they store it in the database instead of in some sort of configuration file. So, if you want to do something like add capabilities to a role, the only “supported” method of doing this is through their GUI. The slow and painful point-click-wait GUI. A task that should take seconds ends up taking an hour if you’re adding several roles with varying capabilities. The reason I say the GUI is stuck in the 90’s is that several technologies have come along in the past 10 years that offer a better way to do things.

  • Batch Import/Export: Using a text file, CSV, XML, or any other structured format, it would be easy to devise a way to batch import and export roles and capabilities (see the sketch after this list).
  • AJAX: Short for “Asynchronous JavaScript and XML.” Vignette could very easily adopt this technology to build a drag-and-drop style interface. This would come in especially handy when moving items up and down in a CTD definition.
  • Typeahead: The ability to begin typing a word and have the browser search for matches and autocomplete it. This would be a nice addition to Vignette’s “find” features.
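Just to make the batch import idea concrete, here’s a minimal sketch of what a role/capability import driven from a CSV file could look like. To be clear, this is purely hypothetical: VCM exposes no such interface today, and the CSV columns and the apply_capability() hook are assumptions I’ve made up for illustration.

```python
import csv

def apply_capability(role, capability, enabled):
    """Hypothetical hook: in a real tool this would call whatever
    role-management interface the CMS exposed (VCM exposes none today,
    which is exactly the gap being complained about)."""
    action = "GRANT" if enabled else "REVOKE"
    print(f"{action} {capability} -> {role}")

def batch_import(path):
    # Assumed CSV columns: role,capability,enabled  (e.g. editor,publish,true)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            apply_capability(
                row["role"],
                row["capability"],
                row["enabled"].strip().lower() == "true",
            )

if __name__ == "__main__":
    batch_import("role_capabilities.csv")
```

A go-live that currently means an hour of point-click-wait would collapse into editing a spreadsheet ahead of time and running one script.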

Anyway, according to my instructor, the 7.5 version of VCM won’t be making any major GUI improvements. There’s one addition from 7.3 to 7.4 that is slightly interesting, and that’s the new “My Page” feature. As far as I can tell, it’s the only place where Vignette has made any improvements based on web technology from the last 10 years. It’s usability issues like these that have more and more people opting for open source content management systems like Joomla these days. Vignette may be the 800-pound gorilla of the content management market, but if it continues to push slow and outdated web technologies, its days are surely numbered.

Eight Simple Ways to Make a Truly Awesome Training Class

I’ve been sitting in Vignette’s Content Management System Administration training class for a day-and-a-half now. The good news is that I’m learning a lot about VCM that I never knew before and even more about content management in general. We’ll save that topic for another blog post, but for now I’d like to talk about the eight simple ways to make a truly awesome training class.

  1. Make absolutely sure the instructor of the class has never taught the class before. Maybe it’s a brand new class or maybe it’s a brand new instructor. Either way, if they’ve never taught the class before they’re likely not going to be able to answer the majority of the questions the students ask them.
  2. Teach the class with PowerPoint slides that contain only a white background with black font and a company logo at the bottom. Every once in a while throw in a confusing diagram, which the instructor struggles to explain since this is their first time teaching the material, to keep things interesting. Under no circumstances should you put any graphics on the slides other than the aforementioned diagrams. Graphics, fonts, and transitions are way too entertaining for a serious company like yours.
  3. When you teach a System Administration class, assume that your students have already installed the product so there’s no need to have them go through the installation steps themselves. Give them a VM image with the software pre-installed and use the PowerPoint to show them how good you are at installing the product. It will give the students confidence in your knowledge and training abilities.
  4. Since you are already providing the students with a pre-installed VM, there’s no point in having several different images for your different product trainings. Merge them all onto the same VM image. Certainly this won’t confuse your students at all, and it’s much easier for you to maintain a single image.
  5. Advertise that the training is for the version of your product that everyone is using, but then provide the training materials and slides for the newer version. You’re sure to get more students this way and now you’re able to show off the new-and-improved features of the other version. Once they see how great the new version is they’ll run off to upgrade as soon as they get back to the office.
  6. Provide a large fridge in the classroom and fill it with only Coke, Sprite, and Diet Dr. Pepper. Do not provide your students with water. No drinking fountains, no faucets, and certainly no water coolers. We all know that soda is mostly water and they could certainly use the sugar to keep them awake during the training.
  7. When deciding upon a location for the training, pick a spot with no windows as they provide too much of a distraction. Basements make an excellent location for training classes. If possible, have the classroom located next to some sort of employee common area. The students will hear the laughter of the employees and simultaneously think about what a great place it must be to work at and how much fun they are having in the training.
  8. Charge extra money for the training and then show the students how gracious you are by providing them with lunch. Don’t order in lunch though. Have the students walk across the parking lot and across the street to the local deli. They won’t mind eating at the same place every day and they could really use the exercise.

Well, that’s it for now. Feel free to comment if you have your own wonderful training experiences to share.

Scalr project and AWS

http://code.google.com/p/scalr/

For those of us getting into Amazon’s Elastic Compute Cloud (EC2), this is a really cool idea. The idea is that as your load grows, a new node is spun up and ready to handle the additional capacity. Once load lessens, boxes are turned off. Integrating this with box stats, response times, and per-service monitoring makes sense.

I wanted everyone to be thinking about the consumable computing model. Paying as you go for what you use is really attractive. No more running 10 boxes in your www cluster all day long if your spike is only from 8am to 3pm. Now you can run the 10 boxes during those hours and fewer boxes during non-peak times… Pretty cool. And cheap!
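Scalr handles the EC2 plumbing, but to make the model concrete, here’s a minimal sketch of the kind of decision loop involved. The thresholds and the cluster_load(), launch_node(), and terminate_node() helpers are placeholders I’ve invented for illustration, not Scalr’s or Amazon’s actual API.

```python
# Sketch of a scale-up/scale-down decision loop for a web cluster.
SCALE_UP_LOAD = 0.75    # average utilization above this -> add a box
SCALE_DOWN_LOAD = 0.25  # average utilization below this -> remove a box
MIN_NODES, MAX_NODES = 2, 10

def cluster_load(nodes):
    """Placeholder: average utilization (0.0-1.0) from your monitoring."""
    raise NotImplementedError

def launch_node():
    """Placeholder: start a new instance and register it with the cluster."""
    raise NotImplementedError

def terminate_node(node):
    """Placeholder: drain an instance and shut it down."""
    raise NotImplementedError

def autoscale(nodes):
    load = cluster_load(nodes)
    if load > SCALE_UP_LOAD and len(nodes) < MAX_NODES:
        nodes.append(launch_node())        # 8am spike: grow the cluster
    elif load < SCALE_DOWN_LOAD and len(nodes) > MIN_NODES:
        terminate_node(nodes.pop())        # 3pm lull: stop paying for idle boxes
    return nodes
```

Run something like that every few minutes against real monitoring data and you only pay for the boxes you actually need.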

OpenX – Open Source Ad Server Research List

http://impactnews.grouphub.com/W1107356

I am compiling research on OpenX through forums, docs and other resources to help put together a more useful resource for first-timers. Check back for updates.

Why is anyone still using WEP?

Wireless internet access is everywhere these days. Everyone from restaurants and bars to the average Joe Homeowner has some sort of Wi-Fi network set up. The problem is that they set up these networks without giving security a second thought (or even a first thought in most cases). I was at the TRISC conference last month and heard SimpleNomad say that he doesn’t pay for internet access anywhere anymore because there’s always an unsecured or poorly secured wireless network wherever he goes. Lately, I’ve been testing that and he’s absolutely right. I’m the only person on my block not running either an open network or a WEP “protected” network. I was even at a local hospital the other day and their “secure” internal network was using WEP.

For those of you just catching up, WEP is an almost-10-year-old wireless security protocol intended to encrypt your wireless transmissions. The problem is that WEP combines a user-defined key with a per-packet “initialization vector” (IV) to generate the RC4 traffic key used to encrypt your data. If I can gather enough of these IVs, I can figure out what the key is, and your network is now pwned. I can speed up this process by injecting my own packets, and I can get your key in under 3 minutes. How’s that for security?
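To see why collecting IVs matters, here’s a rough sketch of the WEP construction: the short IV is simply prepended to your shared key to seed RC4, and that IV is broadcast in the clear with every frame. This is a simplified illustration of the scheme (real WEP adds a CRC and other framing), not a cracking tool.

```python
def rc4_keystream(seed, length):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + seed[i % len(seed)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv, shared_key, plaintext):
    # WEP seeds RC4 with IV || key. The 3-byte IV is sent unencrypted with
    # every packet, which is exactly what an attacker harvests: enough
    # IV/keystream pairs and the shared key falls out.
    keystream = rc4_keystream(iv + shared_key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

iv = bytes([0x01, 0x02, 0x03])     # 24-bit IV, visible to anyone listening
key = b"\x13\x37\xbe\xef\x00"      # 40-bit user-defined key
print(wep_encrypt(iv, key, b"hello, world").hex())
```

With only 24 bits of IV, repeats are guaranteed on a busy network, and certain weak IVs leak information about the key bytes themselves, which is why packet injection gets you to a crack in minutes.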

So, why is anyone still using WEP?  It was deprecated as a wireless privacy mechanism back in 2004.  It is easily cracked and provides only slightly more security than running an open wireless network.  All that, and when you buy a new wireless router it’s most likely still pre-configured with WEP enabled.  On some of these older models, better encryption standards such as WPA or WPA2 aren’t even options.  With much of the wireless network setup falling into the hands of novice users, some of the responsibility lies with the router manufacturers for even allowing them to use WEP.  The rest, in my opinion, is on the users themselves, who put up these networks without being educated enough to do so.  You wouldn’t put a door on your home without making sure the locks worked, would you?  How about buying a car where everyone with that model vehicle had your same key?  I think you get the picture.

Log Management for Dummies (aka Splunk)

Logs are one thing that I think is severely underutilized by most systems administrators. Most of us have taken the first step by actually logging the data, but neglect to organize it into any sort of manageable form. You’ll probably argue that any hardcore *nix admin could take the raw logs and, using grep, cut, awk, and a handful of other *nix power tools, turn them into consumable information, but that will only get you so far.
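For example, the do-it-yourself approach usually ends up looking something like this: a one-off script that answers exactly one question about exactly one log format. (The log path and field position are assumptions for a combined-format Apache access log; adjust for your own environment.)

```python
from collections import Counter

# Tally HTTP status codes from a combined-format access log.
LOG_PATH = "/var/log/httpd/access_log"  # assumed path

status_counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        fields = line.split()
        if len(fields) > 8:
            status_counts[fields[8]] += 1  # status code field in combined format

for status, count in status_counts.most_common():
    print(f"{status}\t{count}")
```

Multiply that by every question you want answered, every log format you have, and every server you run, and the appeal of a purpose-built tool becomes obvious.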

Several months ago we evaluated a bunch of log management solutions with several goals in mind. We wanted a solution that was agile enough to take in a wide variety of log formats as well as configuration files. It needed to shield sensitive information (passwords, credit card information, etc.) from unauthorized users. It needed to provide us with a customizable interface where we could report on all of the log data it gathered. Lastly, it needed to provide our customers (developers) with the ability to self-service their own log files. After evaluating most of the major players in the log management arena, we found our ideal solution in a product called Splunk.

The first thing I noticed when evaluating Splunk was that they’re not like everyone else. They’re not trying to sell you some sort of logging appliance, and they offer their software for free to customers logging 100 MB/day or less. Getting Splunk installed was a breeze. You can have it up and running in minutes. It truly is Log Management for Dummies in that respect, but under the hood there is a highly configurable and customizable tool with an API that you could use to write your own applications to examine log files.
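As a taste of that API, here’s a minimal sketch of running a search over Splunk’s REST interface. I’m assuming the management port (8089), the /services/auth/login and /services/search/jobs endpoints, and a throwaway admin account as documented for recent releases; older versions and locked-down installs may differ.

```python
import requests
import xml.etree.ElementTree as ET

BASE = "https://localhost:8089"                        # assumed management port
AUTH = {"username": "admin", "password": "changeme"}   # throwaway dev credentials

session = requests.Session()
session.verify = False  # self-signed cert on a dev box; don't do this in production

# 1. Log in and grab a session key.
resp = session.post(f"{BASE}/services/auth/login", data=AUTH)
resp.raise_for_status()
session_key = ET.fromstring(resp.text).findtext("sessionKey")
session.headers["Authorization"] = f"Splunk {session_key}"

# 2. Run a blocking ("oneshot") search and dump the raw results.
search = {"search": "search index=main error | head 5", "exec_mode": "oneshot"}
results = session.post(f"{BASE}/services/search/jobs", data=search)
print(results.text)
```

From there it’s a short hop to the custom report generators or dashboards that Splunk doesn’t ship out of the box.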

At this point I’ve mucked around with Splunk for a few months and our configuration is pretty intense. I’ve added custom indexes to make my custom dashboards load faster. I’ve set Splunk up to create queryable metadata fields based on information in the logs. I’ve added filters for custom timestamps and auditing so we can tell if a log file has been modified. I’ve even set up a “deployment server” to distribute Splunk’s configuration bundles to my various types of servers. This brings me to the one drawback of Splunk: upgrading. Rumor has it that they are working on making it easier to upgrade from one version to the next, but for the time being it involves logging in to each server, stopping Splunk, upgrading the files, and restarting Splunk again. If you only had to upgrade every once in a while it would be fine, but they maintain a very active development team, so I find myself constantly wanting to upgrade to get the latest bug fixes and features.
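Until that gets easier, the workaround is to script the drudgery. Here’s a rough sketch of that upgrade loop under some big assumptions: the host list, the install path, the staged tarball location, and passwordless SSH are all placeholders; the only Splunk-specific pieces are the standard splunk stop/start commands and the untar-over-the-existing-install upgrade path.

```python
import subprocess

HOSTS = ["web01", "web02", "app01"]          # placeholder host list
SPLUNK_HOME = "/opt/splunk"                  # assumed install location
TARBALL = "/tmp/splunk-latest.tgz"           # new release, staged ahead of time

def run(host, command):
    """Run a command on a remote host over SSH (assumes key-based auth)."""
    print(f"[{host}] {command}")
    subprocess.run(["ssh", host, command], check=True)

for host in HOSTS:
    run(host, f"{SPLUNK_HOME}/bin/splunk stop")
    # Unpack the new release over the existing install; local configuration
    # under etc/ stays put, per the manual tarball upgrade procedure.
    run(host, f"tar -xzf {TARBALL} -C {SPLUNK_HOME}/..")
    run(host, f"{SPLUNK_HOME}/bin/splunk start")
```

It’s still not a one-click upgrade, but it beats doing the same three steps by hand on every box.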

Other than that, Splunk does exactly what I tell it to do. It grabs all of our logs and presents them in a single intuitive interface. Think of it as a search engine for log and configuration files. Then, once I have the log data in front of me, I can create custom reports based on that data. If I want to, I can even alert based on information Splunk finds in my logs (send an e-mail to a developer every time their application throws an error message). Oh, did I mention that Splunk has a PCI Dashboard that you can install for free? Ask those other guys how much they charge for their PCI solution.

The next time you have some free time be sure to download Splunk and install it on one of your development servers. You won’t be disappointed.

PCI Security Scanning Services

Recently I’ve been doing a lot of work looking at various vendors for the vulnerability scanning portion of PCI compliance (PCI Requirement 6.5). I’ve been talking to many different companies. Some sell tools and some sell services. We’re looking at vendors to either supplement or replace our current tool set. The only real specific requirement with regard to PCI is that you need to follow standard guidelines such as the OWASP Top 10. Seems like a pretty simple task, right? Not really. One vendor I’m talking to seems to be going out of their way to not give us an evaluation before we purchase. Granted, this particular vendor prides itself on having manual checkpoints throughout its scanning process, so there is additional setup cost for an evaluation, but still. How can they expect a customer to drop tens of thousands of dollars on their product without evaluating what it’s capable of and comparing it to other vendors? Another vendor bombards us almost daily with calls asking “What can we do to get you to buy today?” I’ve explained several times that I want to do a comprehensive evaluation and compare their product to several others. Honestly, all this pushiness does is make me wonder what is so wrong with their product that they have to push this hard to sell it. Every time their sales guy calls me, I cringe.

It’s not like our current solution is bad or anything. It finds what it’s supposed to find. Heck, it’s found some stuff that these other guys never did. National Instruments has invested a good chunk of change in these tools and I’m pretty happy using them. I was reading Dark Reading the other day when I came across a blog post by John H. Sawyer of the IT Security Team at the University of Florida. He said…

I’m trying not to be cynical, but it’s getting to the point that choosing a reputable PCI scanning service for your Website is like politics, where you’re left choosing the lesser of two evils. If you’ve got experience, good or bad, with vendors such as McAfee, Qualys, or Rapid7, I’d be interested in hearing about them. I’d love to find a happy customer, and not one in the “ignorance is bliss” sense of the term.

I truly feel his pain. The vulnerability scanning space is full of vendors who promise the moon, but then fail to deliver on so many levels. Even if I buy the hype and purchase one of these miraculous tools or services, can I really consider us to be secure? Maybe I should just show all of these vendors the door and opt for Scanless PCI instead as it’d probably help me sleep just as well at night.