Web Admin Blog

Real Web Admins. Real World Experience.

Getting the Real Administrator Access to Time Warner RoadRunner’s Ubee Cable Modem

This post is going to be short and sweet as it’s something I meant to put up here when I found it sometime back in mid-2011.  I’m not even sure if Time Warner is still using these Ubee cable modems for their RoadRunner offering, but I’m sure that there are at least a few people out there who still have them.  When you get the modem installed initially, they give you some default credentials.  Something like user/user or admin/admin.  Using these credentials, you are able to access the device and many of the features that it has to offer you.  What you are not able to do is access the menus where you can change how the router is actually configured for internet access, change the master password, or prevent Time Warner from accessing your modem, and subsequently, your network.  To fix this, you just need to know the following secret…

The real administrator username that comes configured on these modems when you get them from Time Warner is the last eight digits of the unit’s MAC address sans the colons separating out the values.  This is unique to your device, but can be found pretty easily by looking at the user interface that you do have access to.  The password for this user is “c0nf1gur3m3”.  Use that and you should be in.  Feel free to change the password while you’re in there to keep the Time Warner folks out.
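
If you’d rather not pick the digits out of the MAC by hand, here’s a minimal Python sketch of that derivation; the MAC address in the example is made up, so substitute the one shown in your modem’s interface (and match whatever letter case the UI displays):

    # Minimal sketch: derive the Ubee admin username from the modem's MAC address.
    # The username is the last eight hex digits of the MAC with the colons removed.
    # The MAC below is a made-up example; use the one shown in your modem's UI.

    def ubee_admin_username(mac: str) -> str:
        """Return the last eight hex digits of a colon-separated MAC address."""
        hex_digits = mac.replace(":", "").replace("-", "")
        return hex_digits[-8:]

    print(ubee_admin_username("00:1A:DE:AD:BE:EF"))  # prints "DEADBEEF"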

One other kinda secret thing to note is that if you do want to change how the router is configured for internet access, you will need to go to http://192.168.0.1/TlModeChange.asp on your router to do so.  Once there, you can change it to Bridge mode, NAT mode, Router mode, or NAT Router mode depending on what you are looking to do with it.  Hope you enjoyed this simple solution for getting the real administrator access to Time Warner RoadRunner’s Ubee cable modem.

***Update:  If the above isn’t working for you on Time Warner Cable, try one of these suggestions from the comments:

  • Username: admin / Password: cableroot
  • Username: technician / Password: C0nf1gur3Ubee#
  • Username: admin / Password: C0nf1gur3Ubee#

Are Invisible Barbarians At Your Gates?

A couple of weeks back, HD Moore posted a blog entry entitled “Security Flaws in Universal Plug and Play: Unplug, Don’t Play” supporting a Rapid7 whitepaper in which he discusses the 81 million unique IP addresses that respond to UPnP discovery requests on the Internet and the 23 million fingerprints that match a version of libupnp that exposes those systems to remote code execution.  His research on the subject is fascinating and I highly recommend reading it over, but that’s not the reason why I’m writing this.  The first question this research had me asking myself is whether or not my organization utilizes UPnP for anything.  As far as I can tell, the answer to this question is, thankfully, no.  Next, out of curiosity, I began to wonder how many people were out there actively trying to find these exploits.  A perfect opportunity to fire up our new LYNXeon tool.

Our LYNXeon tool is configured to consume NetFlow data provided by literally hundreds of routers and switches in our global environment.  One of the most interesting things about it is that it can be used to see the traffic that comes in from our edge routers before it gets squashed by our firewall.  Utilizing this tool in this way, we can visualize the so-called “Barbarians” at our gates.  These are the hackers that are out there trying to find the weak spots in our security in order to get in.  And since I know that UPnP is not a service that we offer up to the Internet at large, it makes finding the guys who are looking to exploit it that much easier.

I fire up LYNXeon and my first step is to write what is known as “PQL” or “Pattern Query Language”.  While their Cyber Analytics Catalog offers up a ton of templates to use to find potential threats, PQL is the base of all of those queries, and writing your own allows you to define your own catalog of things to look for.  The language is pretty easy to understand.  First you define the characteristics of the connections that you are looking to find.  After doing some research, I found out that HD was looking for openings in UPnP’s Simple Service Discovery Protocol (SSDP) service, which typically runs on UDP/1900.  So, my query is for connections from external source IPs to internal destination IPs using the UDP protocol on port 1900.  Once the connections have been defined, all that is left to do is define the data that you want to see in the results.  In total, my PQL query is 15 lines of code:

[Screenshot: the SSDP pattern query in PQL]
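
For anyone without LYNXeon handy, here’s a rough sketch of the same filter logic in plain Python over exported flow records; the record field names and the internal address range are assumptions for illustration, and this is not PQL syntax:

    # Rough sketch of the filter logic behind the pattern (not PQL syntax):
    # find flows from external source IPs to internal destination IPs on UDP/1900 (SSDP).
    # The flow-record fields and the internal prefix are illustrative assumptions.
    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")  # example internal range; substitute your own

    def is_ssdp_probe(flow: dict) -> bool:
        """True if an external host is hitting an internal host on UDP/1900."""
        src = ip_address(flow["src_ip"])
        dst = ip_address(flow["dst_ip"])
        return (
            src not in INTERNAL
            and dst in INTERNAL
            and flow["protocol"] == "udp"
            and flow["dst_port"] == 1900
        )

    # Example usage over a couple of made-up flow records:
    flows = [
        {"src_ip": "203.0.113.7", "dst_ip": "10.1.2.3", "protocol": "udp", "dst_port": 1900},
        {"src_ip": "10.1.2.3", "dst_ip": "10.4.5.6", "protocol": "tcp", "dst_port": 443},
    ]
    print([f for f in flows if is_ssdp_probe(f)])  # only the first record matches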

Now it’s officially time to make these invisible Barbarians visible.  I tell LYNXeon to only show me results over the last day (to reduce the amount of time the search takes) and then tell it to “Execute Pattern Search” using the pattern file that I just created.  Searches will vary in time based upon the timeframe searched, the number of forwarding devices, and how complicated your search criteria are.  For me, this search returned 539 results in one minute and 38 seconds.

[Screenshot: the completed pattern search returning 539 results]

Now that I have results, I just need to select how to view them.  My personal favorite is viewing the results in the Link Explorer.  This will show my data as nodes on a pictorial graph.  I make one quick adjustment using an organizational feature called “Force Directed Layout” to make the pictures look pretty and voila!

[Screenshot: the full Link Explorer graph, zoomed out]

OK, so zoomed out it looks like a bunch of spider webs.  Now the fun begins as we begin zooming in on each cluster to see what is going on.

[Screenshot: one cluster of connections, zoomed in]

I’ve blacked out the IP address of the system these guys are connecting to as it is irrelevant for the purposes of this post, but you can clearly see that in the past day this one system has had eight unique IP addresses attempt to connect to it on UDP port 1900.  I’ve got dozens more just like these on that big graph above, with varying degrees of complexity.  From here, LYNXeon allows me to resolve DNS and/or ARIN names for the associated IP addresses.  I can also expand upon those sources to see what else of mine they’ve been talking to.  Is that cool or what?  It’s taken me minutes, and little more than a few clicks of the mouse, to find these potential threats.  The Barbarians are most definitely at my gates, silently pounding away, and chances are pretty good that they are doing the same to you.  The question is…can you find them?
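
If you wanted to do a crude version of that DNS enrichment outside of LYNXeon, a minimal Python sketch might look like the following; the flagged addresses are made-up examples:

    # Minimal sketch: reverse-DNS a handful of flagged source IPs.
    # LYNXeon does this (plus ARIN lookups) in the UI; this is just a DIY equivalent.
    import socket

    flagged_ips = ["198.51.100.23", "203.0.113.7"]  # made-up examples

    for ip in flagged_ips:
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)
        except socket.herror:
            hostname = "(no reverse DNS)"
        print(f"{ip} -> {hostname}")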

Visual Correlation of Security Events

I recently had the opportunity to play with a data analytics platform called LYNXeon by a local company (Austin, TX) called 21CT. The LYNXeon tool is billed as a “Big Data Analytics” tool that can assist you in finding answers among the flood of data that comes from your network and security devices and it does a fantastic job of doing just that. What follows are some of my experiences in using this platform and some of the reasons that I think companies can benefit from the visualizations which it provides.

Where I work, data on security events is in silos all over the place. First, there are the various security event notification systems that my team owns. This consists primarily of our IPS system and our malware prevention system. Next, there are our anti-virus and end-point management systems, which are owned by our desktop security team. There are also event and application logs from our various data center systems, which are owned by various teams. Lastly, there’s our network team, who owns the firewalls, the routers, the switches, and the wireless access points. As you can imagine, when trying to reconstruct what happened as part of a security event, the data from each of these systems can play a significant role. Even more important is your ability to correlate the data across these siloed systems to get the complete picture. This is where log management typically comes into play.

Don’t get me wrong. I think that log management is great when it comes to correlating the siloed data, but what if you don’t know what you’re looking for? How do you find a problem that you don’t know exists? Enter the LYNXeon platform.

The base of the LYNXeon platform is flow data obtained from your various network devices. Regardless of whether you use Juniper JFlow, Cisco NetFlow, or one of the many other flow data options, knowing what traffic is going from one place to another is crucial to understanding your network and any events that take place on it. Flow data consists of the following:

  • Source IP address
  • Destination IP address
  • IP protocol
  • Source port
  • Destination port
  • IP type of service

Flow data can also include byte and packet counts, which tell you how much data moved over each connection.
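
If it helps to picture what a single flow record looks like, here’s a minimal sketch of one as a data structure; the field names are illustrative and not tied to any particular NetFlow or JFlow export format:

    # Minimal sketch of a flow record holding the fields listed above,
    # plus optional byte/packet counts for data size. Field names are
    # illustrative, not tied to any specific NetFlow/JFlow export format.
    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        src_ip: str
        dst_ip: str
        protocol: int       # IP protocol number, e.g. 6 = TCP, 17 = UDP
        src_port: int
        dst_port: int
        tos: int            # IP type of service
        byte_count: int = 0     # optional: how much data the flow carried
        packet_count: int = 0

    example = FlowRecord("10.1.2.3", "8.8.8.8", 17, 53124, 53, 0, byte_count=412, packet_count=3)
    print(example)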

The default configuration of LYNXeon basically allows you to visually (and textually) analyze this flow data for issues, which is immediately useful.  LYNXeon Analyst Studio comes with a bunch of pre-canned reporting which allows you to quickly sort through your flow data for interesting patterns.  For example, once a system has been compromised, the next step for the attacker is oftentimes data exfiltration.  They want to get as much information out of the company as possible before they are identified and their access is squashed.  LYNXeon provides you with a report to identify the top destinations, in terms of data size, for outbound connections (a rough sketch of that kind of analysis follows the list below).  Some other extremely useful reporting that you can do with basic flow data in LYNXeon:

  • Identify DNS queries to non-corporate DNS servers.
  • Identify the use of protocols that are explicitly banned by corporate policy (P2P?  IM?).
  • Find inbound connection attempts from hostile countries.
  • Find outbound connections via internal protocols (SNMP?).
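
As promised above, here’s a rough sketch of that exfiltration-style report, ranking outbound destinations by total bytes sent; again, this is plain Python over flow records shaped like the earlier sketch, not LYNXeon’s built-in analytic:

    # Rough sketch of a "top outbound destinations by data size" report,
    # in the spirit of the LYNXeon exfiltration analytic described above.
    # Assumes flow records shaped like the FlowRecord sketch earlier.
    from collections import Counter
    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")  # example internal range; substitute your own

    def top_outbound_destinations(flows, n=10):
        """Sum bytes per external destination for flows leaving the internal network."""
        totals = Counter()
        for f in flows:
            if ip_address(f.src_ip) in INTERNAL and ip_address(f.dst_ip) not in INTERNAL:
                totals[f.dst_ip] += f.byte_count
        return totals.most_common(n)  # [(destination_ip, total_bytes), ...]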

It’s not currently part of the default configuration of LYNXeon, but they have some very smart guys working there who can provide services around importing pretty much any data type you can think of into the visualizations as well.  Think about the power of combining the data of what is talking to what along with information about anti-virus alerts, malware alerts, intrusion alerts, and so on.  Now, not only do you know that there was an alert in your IPS system, but you can track every system that target talked with after the fact.  Did it begin scanning the network for other hosts to compromise?  Did it make a call back out to China?  These questions and more can be answered with the visual correlation of events through the LYNXeon platform.  This is something that I have never seen a SIEM or other log management company be able to accomplish.

LYNXeon probably isn’t for everybody.  While the interface itself is quite easy to use, it still requires a skilled security professional at the console to be able to analyze the data that is rendered.  And while the built-in analytics help tremendously in finding the proverbial “needle in the haystack”, it still takes a trained person to be able to interpret the results.  But if your company has the expertise and the time to go about proactively finding problems, it is definitely worth looking into both from a network troubleshooting (something I really didn’t cover) and security event management perspective.

Roadrunner Extreme Broadband Beta

I was having lunch with Charles Henderson from Trustwave Spider Labs the other day and he mentioned that he had just gotten signed up with the new Roadrunner Extreme Broadband Beta from Time Warner Cable. He mentioned insane download and upload speeds as well as the new DOCSIS 3.0 compliant modem. It was enough to pique my interest and get me to call Time Warner.

I have been on the older Roadrunner Turbo-charged plan since basically when it first came out and have been generally happy with the service up until recently, when I’ve started having to reboot the modem daily. I’m also kind of an internet speed addict, so the idea of moving up to 20 Mbps downloads and 5 Mbps uploads was pretty sweet to me. That’s just to start with, as eventually the service will have 30 Mbps downloads. I called up Time Warner and asked what it would take to move onto the Extreme Broadband Beta and they told me that it was only an extra $5/mo over my Turbo-charged plan. Even better was that they were offering free installation as part of the Beta. They were able to get the install scheduled just over a week out. Not too bad.

The service technicians came out on the designated day and time and got everything hooked up for me. They even replaced a bunch of the wiring in the box on the side of the house where the service connects. They did some line tests and within minutes I was up and running on the new service. While not the 5 Mbps upload that was advertised to me, the download speed is quite impressive. Check it out:

The other cool thing is that, while not necessarily intended, it is very easy to get into the new Ubee modem’s configuration interface. By default, the device comes up as 192.168.0.1 on your network and has a username and password of user/user. Get in there and it’s got all of the configuration options of a wireless internet gateway. The first thing that you should do is change the username and password. After that, enable the wireless network, configure port forwarding, etc.

Not only does the new modem have built-in wireless N, but it also has four additional network ports so you can use it with multiple computers on your network. I remember the days when Time Warner used to charge you if you had more than one computer, but not anymore.

Granted, I’ve only had the new service for a few hours now, but I’m already pretty impressed. If you’re an internet speed demon like me, and you live in the Austin area, I’d recommend that you give Time Warner a call and ask about switching over to the new Roadrunner Extreme Broadband Beta. Enjoy!

Demanding Secure Developers

Much like many other companies these days, National Instruments hires many of our developers straight out of school. Many times when engaging with these new-hire developers, I will ask them what kind of security they learned at their university. In almost all cases I’ve found that the answer hasn’t changed since I graduated back in 2002. Occasionally I’ll get a developer who mentions one particular professor or class where they discussed secure coding practices, but most of the time the answer is “I didn’t learn security in school”. This absolutely kills me. It’s like asking an architect to design a building without them knowing anything about support structures and load distribution. The end result may look awesome on the outside, but the slightest breeze will knock it over. With computers being embedded into literally every aspect of our society, do you really want code that crumbles the moment a user does something other than what was explicitly intended?

This leads me to the conclusion that security should be considered a fundamental part of code development and not an afterthought. We should be teaching security to students at a University level so that when they graduate, corporations don’t spend valuable time re-training them on proper development techniques. I’ve heard rumors of large companies like Oracle actually being able to impact college curriculum by telling universities they simply won’t hire developers without security training. Unfortunately, most companies aren’t in a position to make demands like that, but it certainly wouldn’t hurt to develop relationships with faculty at your local university and tell them what you’d like to see out of their students. I did some poking around on the internet and it seems like some professors are already starting to get the memo. For example, I found a great paper written by three professors at the USAF Academy Dept. of Computer Science called Incorporating Security Issues Throughout The Computer Science Curriculum where they say:

While the general public is becoming more aware of security issues, what are our universities doing to produce graduates ready to address our security needs?  Computer science as a discipline has matured to the point that students are regularly instructed in software engineering principles–they learn the importance of life cycle issues in the development and maintenance of software.  Where are they receiving similar instruction on security concerns in the software life cycle?  The authors propose that security should be taught throughout every computer science curriculum–that security should always be a concern and should be considered in the development of all software just as structured programming and documentation are.

Gentlemen, I couldn’t agree more.  Security needs to be a foundational piece of every Computer Science program in the country.  Not one class.  Not one professor.  Secure programming techniques need to be a consideration in every CS class at every university.  Universities teach students how to write functions, create object-oriented code, and do proper documentation, but when graduates don’t know the basic tenets of input validation, we have a real problem.  If you agree with me, then I challenge you to write to the Dean of your local CS program and ask them what they are doing to ensure graduates are familiar with secure coding practices.  I’d be very interested in hearing back from you as to what their response was.

Physical Security FAIL :-(

Notice anything wrong with this picture?

[Photo: the lock on the Iron Mountain Secure Shredding bin, hanging open]

I was walking by one of the Iron Mountain Secure Shredding bins at work one day several months ago and noticed that the lock wasn’t actually locked. Being the security-conscious individual that I am, I tried to latch the lock again, but it was so rusted that it wouldn’t close no matter how hard I tried. I couldn’t just leave it like that, so I called the number on the bin’s label and got an automated message telling me that they weren’t taking local calls anymore, along with a different number to try. I called that number and they asked me for my company ID number, which I had no idea what it was. The woman on the line informed me that without that ID number I couldn’t submit a support request. I informed her that this bin contained sensitive personal and financial information and that the issue couldn’t wait for some random company ID to be found. Fortunately, she gave in and created the support ticket for me, saying that I should hear back from someone within four hours.

One week later, on Friday, Iron Mountain finally called me back and said that they would come to replace the lock the following Monday before 5 PM. When the lock still hadn’t been replaced on Monday evening, I called Iron Mountain back up. Their records showed that a new lock had been delivered, but they had no idea where, and the signature was illegible. I work on a three-building campus with 14 floors between them and almost 3,000 people. If they can’t tell me where the lock is, then there’s no way for me to track it down. They said that they would investigate and call me back.

After not hearing back from them again for a couple of days, I called them back. The woman I spoke with had no real update on the investigation. She said that she would send another message “downstairs” and escalate to her supervisor. At this point it had been almost three weeks with sensitive documents sitting in a bin with a malfunctioning lock. The next day they called me back and said they were never able to track down who the new lock was left with, so they would bring us a new one at no charge. Finally, after a total of 24 days with an unlocked Secure Shredding bin, Iron Mountain was able to replace the lock. Iron Mountain…FAIL.

Static Application Vulnerability Testing: Binary Scanning vs Source Code Scanning

I had a meeting yesterday with a vendor who sells a SaaS solution for binary application vulnerability testing. They tell a very interesting story of a world where dynamic testing (“black box”) takes place alongside static testing (“white box”) to give you a full picture of your application security posture. They even combine the results with some e-Learning aspects so that developers can research the vulnerabilities in the same place they go to find them. In concept, this sounds fantastic, but I quickly turned into a skeptic, and as I dug deeper into the details I’m not sure I liked what I found.

I wanted to make sure I fully understood what was going on under the hood here so I started asking questions about the static testing and how it works. They’ve got a nice looking portal where you name your application, give it a version, assign it to a group of developers, and point it to your compiled code (WAR, EAR, JAR, etc). Once you upload your binaries, their system basically runs a disassembler on it to get it into assembly code. It’s then at this level that they start looking for vulnerabilities. They said that this process takes about 3 days initially and then maybe 2 days after the first time because they are able to re-use some data about your application. Once complete, they say they are able to provide you a report detailing your vulnerabilities and how to fix them.

The thing that immediately struck me as worth noting here was the 2-3 day turnaround. This means that our developers would need to wait a fairly substantial amount of time before getting any feedback on the vulnerability status of their code. In a world full of Agile development, 2-3 days is a lifetime. Compare that to static source code testing where you get actionable results at compile time. The edge here definitely goes to source code testing as I believe most people would prefer the near-instant gratification.

The next thing worth noting was that they are taking binary files and disassembling them in order to find vulnerabilities. This leads to one major issue: how can you determine with any accuracy the line number of a particular vulnerability written in, let’s say, Java from assembly code generated by disassembling the binaries? By default, it’s simply not possible. This vendor claimed that they can by adding in some debug strings at compile time, but even then I’d contend that you’re not going to get much. I’m guessing they have some heuristics that are able to tell what function generated a set of assembly code, but I’m extremely skeptical that they can do anything with variable names, custom code functions, etc. I’ve seen some source code scanners, on the other hand, that not only tell you what line of code is affected, but are able to give you an entire list of parameters that have been consequently affected by that vulnerability. The edge here definitely goes to source code testing.

The main benefit that I can see with binary testing vs source code testing is that we can test code that we didn’t write. APIs, third-party applications, open source, etc. are all things that we now have visibility into. The only problem here is that while we can now see the vulnerabilities in this software, they are unfortunately all things that we can’t directly influence change in, unless we want to send our developers off to work on somebody else’s software. I’d argue that scanning for vulnerabilities in that type of code is their responsibility, not ours. Granted, it’d be nice to have validation that there aren’t vulnerabilities there that we’re exposing ourselves to by adopting it, but in all honesty, are we really going to take the time to scan somebody else’s work? Probably not. The edge here goes to binary testing, with the caveat being that it’s in something that I frankly don’t care as much about.

This isn’t the complete list of pros and cons by any means. It’s just me voicing in writing some concerns that I had about the technology while talking to this particular vendor. In my opinion, the benefits of doing source code testing far outweigh any benefits that we could get from testing compiled binary files. What do you think about the benefits of one versus the other? I’d certainly love for someone to try to change my mind here and show me where the real value lies in binary testing.

Auditors Just Don’t Understand Security

Part of my new role as the Information Security Program Owner at NI is taking care of our regulatory compliance concerns, which means I spend quite a bit of time dealing with auditors. Now, auditors are nice people, and I want to preface what I’ll say next by saying that I think auditors do perform a great service to companies. I’m sure that most of them are hard workers and understand compliance requirements probably better than I do, but they just don’t understand security.

As a case in point, we’re in the middle of our annual audit by one of those “Big Four” audit firms, which I won’t name here to protect the innocent. I sent an email checking in with our auditors to make sure that they had everything they needed before we went into our four-day holiday weekend. They said that they had received everything they needed except for documentation on “privileged users from the current OS and Database environments” as well as “evidence of current password settings from the application servers, OS, and Database”. We went through a round of translation from Auditorese to Techie and figured out that they wanted exports of some specific user, profile, role, and privilege tables from the database and copies of /etc/passwd, /etc/shadow, and /etc/group from the servers.

So we obtained the requested documentation and I shot them back an email to find out their proposed method for transferring the files. Secure FTP? No. PGP encryption? Nope. Their response back was astonishing:

How large do you think they’ll be? Email should be fine.

Seriously? These are the guys that we’re paying to verify that we’re properly protecting our systems, and they’re suggesting that sending our usernames and password hashes via cleartext email is an appropriate method of file transfer. I responded:

I’m not really concerned about the size of the files, but rather, the data that they contain. Sending files containing the users, groups, and password hashes for our financial systems via cleartext is probably not a good plan considering the point of this process is protecting that data.

And they responded with:

Whatever you’d like Josh. As long as you have the files as of today, we’re good.

So now I’m convinced that auditors (or at least these auditors) view security as nothing more than a checklist. The people telling me what I need to do in order to protect my systems really have no clue about the fundamentals of security. If it’s not on their checklist, then it must not be of importance. In this particular situation it may be easier or more convenient to send the documents via email, but any security professional worth their salt would tell you that’s neither secure nor appropriate for that data. Either our auditors hold themselves to a very different standard than the rest of us security professionals, or they just don’t understand security unless it’s on a checklist.

Looking for DevOps Stuff?

If you heard about us at Velocity 2010 and are coming here for sweet sweet DevOps and agile info,  we’ve moved to a different blog – come see us at the agile admin!

Our First DevOps Implementation

Although we’re currently engaged in a more radical agile infrastructure implementation, I thought I’d share our previous evolutionary DevOps implementation here (way before the term was coined, but in retrospect I think it hits a lot of the same notes) and what we learned along the way.

Here at NI we did what I’ll call a larval DevOps implementation starting about seven years ago when I came and took over our Web Systems team, essentially an applications administration/operations team for our Web site and other Web-related technologies.  There was zero automation and the model was very much “some developers show up with code and we had to put it in production and somehow deal with the many crashes per day resulting from it.”  We would get 100-200 on-call pages a week from things going wrong in production.  We had especially entertaining weeks where Belgian hackers would replace pages on our site with French translations of the Hacker’s Manifesto.  You know, standard Wild West stuff.  You’ve been there.

Step One: Partner With The Business

The first thing I did (remember, this is 2002) was partner with the business leaders to get a “seat at the table” along with the development managers.  It turned out that our director of Web marketing was very open to the message of performance, availability, and security and gave us a lot of support.

This is an area where I think we’re still ahead of even a lot of the DevOps message.  Agile development carries a huge tenet about developers partnering side-by-side with “the business” (end users, domain experts, and whatnot).  DevOps is now talking about Ops partnering with developers, but in reality that’s a stab at the overall more successful model of “biz, dev, and ops all working together at once.”