Security Has to Be Job #1

It’s been a busy week for security administrators. First came the LizaMoon attack last week. Then, data was stolen from Epsilon.

LizaMoon Attack

I first started reading about LizaMoon last Friday (4/1/2011). I wondered whether it was an April Fools’ joke but quickly discovered it was not. Depending on which estimates you read, LizaMoon infected between 100,000 and more than 1,000,000 pages. In my opinion, the worst part of this attack is that it should have been prevented.

According to WebSense:

We were able to find more information about the SQL Injection itself (thanks Peter) and the command is par for the course when it comes to SQL Injections.

While the attackers did escape many of the characters to make it difficult to decipher what they were doing, SQL injection attacks are easy to prevent:

  1. Run your web application under an ID with limited privileges.
  2. Use parameters in all SQL statements; this alone would have prevented this attack (see the sketch below).
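
To illustrate the second point, here is a minimal sketch of a parameterized query using ADO.NET. The table and column names are hypothetical; the point is that the user-supplied value travels as a parameter, so the database treats it as data rather than as SQL to execute:

using System.Data.SqlClient;

static class UserLookup
{
    // Hypothetical lookup. Because @userName is bound as a parameter,
    // input like "'; DROP TABLE Users; --" is just an odd string value,
    // not executable SQL.
    public static string FindEmail(string connectionString, string userName)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Email FROM Users WHERE UserName = @userName", connection))
        {
            command.Parameters.AddWithValue("@userName", userName);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}

The same idea applies in any data access library: never build SQL by concatenating user input into the statement text.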

I found this attack disappointing. I remember reading about SQL injection attacks, and how to prevent them, more than five years ago. How can we have come so far as an industry and still have so far to go?

Epsilon Attack

Sometime last week, Epsilon was attacked and had data stolen. I first heard about this when I received an email from BestBuy Reward Zone which included:

On March 31, we were informed by Epsilon, a company we use to send emails to our customers, that files containing the email addresses of some Best Buy customers were accessed without authorization.

We have been assured by Epsilon that the only information that may have been obtained was your email address and that the accessed files did not include any other information. A rigorous assessment by Epsilon determined that no other information is at risk. We are actively investigating to confirm this.

I received a similar email from Chase today. And, according to Engadget, more companies than these two had their data stolen:

A rigorous investigation has concluded that no other personal data was exposed, however it’s not just TiVo that’s affected — other big names, such as JPMorgan Chase, Citi, US Bank, Kroger, and Walgreens have also seen their users’ deets dished out to the unidentified intruder.

The good news is that only names and email addresses were stolen; no information was taken that would lead to monetary loss or identity theft.

But think about what this will do to Epsilon itself. They haven’t published how the hackers got in and stole the data, and I could argue that they shouldn’t.

But hackers did break into their systems and steal data. They had to issue a press release stating as much. And the impacted companies are blaming Epsilon for the security breach, as they should. I wonder how much this will hurt Epsilon’s reputation and market share.

So, What’s Next?

The LizaMoon attack was caused by developers not accounting for SQL injection attacks in their code.

We don’t know the underlying cause of the Epsilon attack. From the press release and the apology emails being sent out, I think it’s safe to assume it was an exploit rather than a social engineering attack. That means it was probably either a developer-created security hole or operations falling behind on patching servers (or both).

So how do we prevent these types of attacks?

First, I think we as developers need to take more ownership of these types of problems and do more to prevent them. The problem is that we’ve heard all this before. And many of us do take ownership and work hard to ensure our code doesn’t have any exploits. But a lot of us don’t.

Second, we need to educate our management so that security gets taken seriously. If we do find an exploit in existing code, fixing it has to be a top priority. I bet that isn’t always the case. Risk is, I believe, the key to getting management to raise the priority of fixing exploits: talk about what would happen if the exploit were found and abused.

Third, we need better tools to help us determine whether our websites are secure. Most of the tools I’m aware of do code analysis, looking for exploits within our code. That doesn’t help in these cases because the exploits are in how our web site behaves, not in how our code is written. Hopefully some good tools will come along shortly to help us in this area.

C# and Brackets

C# uses a lot of brackets. But there are times when you don’t need brackets: specifically, when you only have one statement to execute inside the loop or if statement. You have to use brackets when there are multiple statements inside the loop or if.

I do think, though, that there are times when you should use brackets even though the language doesn’t explicitly call for them.

Specifically, when I have nested loops or if statements, I use brackets on all but the innermost structure even though they aren’t required. The C# language reference for if-else uses the following example:

int x = 12;
int y = 18;

if (x > 10)
    if (y > 20)
        Console.Write("Statement_1");
    else
        Console.Write("Statement_2");

I would never write that; I would automatically add brackets around the outermost if:
int x = 12;
int y = 18;

if (x > 10)
{
    if (y > 20)
        Console.Write("Statement_1");
    else
        Console.Write("Statement_2");
}

Why? I think the first structure is more error-prone. Also, simply re-indenting an else can make the code appear to do one thing while it actually does another. For example, what if the first block were rewritten this way:

int x = 12;
int y = 18;

if (x > 10)
    if (y > 20)
        Console.Write("Statement_1");
else
    Console.Write("Statement_2");

This makes it appear that the else is associated with the first if statement, but we know it’s not. Visual Studio’s automatic formatting will eventually fix the indentation. But until it does, the visual appearance doesn’t reflect how the code runs. And that can cause bugs that are hard to find. So I will continue to always put in the brackets, because there is nothing worse than bugs that hide in plain sight.

Shutting Things Down

This isn’t a post about technology or about development; it is about the emotional toll getting laid off can take. As you can probably guess, I’m getting laid off at the end of December. Having been lucky enough to get through more than 20 years without being laid off, I’m surprised how much this is weighing on me emotionally.

In addition to the layoffs, the company is transitioning to what I’d describe as a holding company. As of January 1, it will no longer have any staff or conduct any business. From an IT perspective, this means turning off all IT services and systems at the end of the month.

I have known this change was coming for a few months now. I thought that knowing would make dealing with it easier. I was wrong. When I’m up late at night, I start thinking about all of the work we did over the past eight years and how it must have had no value. After all, if it had value, wouldn’t it be kept?

I do know this isn’t the case. The work we did provided tremendous value. It allowed us to compete with much larger players and let us force some positive changes in our industry. That’s something we need to be proud of.

I also know that the decision to shut down operations and lay off all staff was not a personal one. The company is changing because it had to. We were at a point where competing was more difficult and more expensive. Something had to change.

A while back, I read (sorry, I’m paraphrasing because I can’t find the source for this quote – if you know the source, please let me know):

Behind every business decision that forces personnel changes are people feeling the personal impacts of that decision.

In other words, the fact that I am losing my job makes the business decision personal to me. In an odd way, this makes me feel better; it validates what I’m feeling and makes me believe what I’m feeling is normal.

I know I will get through this and will come out the other side a stronger and better person. But it is not a fun road to travel.

User Interfaces Are Complicated

I’ve been doing some work on a food diary site of mine. One of the items I capture is the time food was eaten. I never thought capturing a time in a user interface was so difficult until I started to work on it.

My first step was to figure out what I needed to capture. I decided I didn’t need an exact time; an approximate time would be good enough (within a 15- or 30-minute window). I looked for a jQuery plug-in to do this. I found some that used drop-downs to capture hours, minutes, and seconds. I found some that used spinners. I didn’t like any of those.

I found a couple that provide a type of drop-down (more like an auto-complete than a true drop-down), and I liked that approach. But none of them were quite what I was looking for, so I rolled my own. So far it’s okay, but I still need to do some tweaking on it.

I decided to make a mobile web version of the site. After doing some research, I decided to create the mobile version using jQuery Mobile. Its feature set is pretty cool, and it seems rather stable even though it is only an alpha release.

Then I got to time entry. For my control, I display a scrollable div below a textbox so that a time can be typed in or selected. In phone browsers, the scrollable div displays but doesn’t scroll. Plus, given the assumptions I made about data entry, the approach really doesn’t work for phone/touch-based browsers. For example, when you hit Tab or click on the next field, the scrollable div auto-hides. But who hits Tab on a phone? And because of the window size, it’s hard to click on the next textbox. So an approach that works decently in a computer browser really doesn’t fit the mobile browser.

For now, I have pulled back on the mobile version of my site. While jQuery Mobile is really slick, there are a few too many things missing. I did decide that, when I’m ready, I’ll do the date and time with spinners the way Android does natively (a separate text box for each entry item, with up and down arrows above and below the text box respectively).

It’s amazing how complicated a single user interface element can become. 

Passwords Becoming History?

I’ve written before about passwords. Now the notion of passwords changing is going mainstream. Why do I say that? An article about passwords going away made it into Consumerist. While I don’t think face scanning will necessarily be the security of the future, something will be. Personally, I think two-factor authentication is the most likely option for web sites. For Windows, I’m not sure what it will be. I’m also not sure it matters, because if somebody gets hold of your machine, access is pretty easy.

New Phone

I just got a new Android phone, the T-Mobile G2. And I love it. It’s fast, it’s responsive, and the download speeds are incredibly fast (for a phone). The phone is a little on the heavy side, but it feels so solid the weight doesn’t bother me. In fact, I would say this is one of the best-“feeling” electronic devices I’ve had in years.

This phone replaces the T-Mobile G1 that I’ve had for close to two years. The G1 was nice, but it was getting long in the tooth. I was disappointed when they didn’t push Android 2.1 to the G1. And it was starting to feel really slow with some of the applications I use, like the Google Navigation app.

So the first question that comes up is why I didn’t get an iPhone. There are two primary reasons. First, I don’t really want to go back to AT&T. I was with AT&T for years, originally with AT&T Wireless, and their customer service had gotten to the point where I thought it was terrible. It is what made me switch to T-Mobile (who seems to have some of the best customer service in wireless). On top of that, a family plan similar to my T-Mobile plan would cost me a bit more per month. My second reason for no iPhone is that I hate iTunes and don’t want to install that beast on my computer.

I am not anti-Apple, though. I believe that Apple, with the iPhone, has taken the user experience to a new, higher level. And that has forced changes at other manufacturers that have made all cell phones better. I’m guessing the iPad will have a similar effect on the netbook market.

But back to my G2. I knew I wanted another Android phone, and with T-Mobile I had a few to choose from. For me, it came down to two phones: the G2 and the Samsung Vibrant (the T-Mobile Galaxy S phone). A few things made me select the G2 over the Vibrant:

  1. The G2 is running Android 2.2 today; the Vibrant is still on 2.1.
  2. The G2 uses the new HSPA+ connection, giving 4G connectivity speeds.
  3. The G2 is a pretty vanilla Android install (which is closer to what I was looking for); the Vibrant includes the Samsung TouchWiz interface. One problem I see with custom interfaces is that they slow down Android updates to the phone (which is, I believe, why the Vibrant is still on Android 2.1).

That said, there are a few things I wish the G2 had:
  1. More than 4GB of built-in flash memory (and with only 1.2GB available, what happened to the rest?).
  2. The ability to uninstall some of the pre-installed Google apps. For example, Google Goggles and Google Earth are cool apps that I don’t see myself using on my phone. But I cannot uninstall them.

And there are a few really cool features that I get to take advantage of now because I upgraded to a G2:
  1. Chrome to Phone – This is a WOW feature. I look up an address on maps.google.com, click the Chrome to Phone button and, presto, the map shows up on my phone where, with a simple click, I can use it in the Google Navigation app. Very cool.
  2. Email, calendar, and contact integration with Exchange now exists and is fantastic. On my G1 I had to use a third-party app. With my G2, I set up the Exchange server as an email account, and everything just integrated automatically.
  3. The performance and responsiveness of this phone are phenomenal. It responds to touch instantly, and everything opens very quickly. Yes, it is “only” an 800MHz chip instead of the 1GHz chips in a lot of other phones (like the iPhone 4 and Samsung Vibrant), but it is also a next-generation chip. And most of the comparisons I’ve seen between the G2 and the Nexus One (running Android 2.2 with the 1GHz chip) have the G2 as the faster phone.

Overall, I couldn’t be happier with my choice, though I’m sure some new phone will come out in another couple of months that will make me wish I had waited. 😉

Passwords & Security

I’m surprised by how many sites and IT departments continue to force users to change their passwords every 30, 60, 90, or 180 days. I find this practice annoying and wonder why everybody thinks it is a good idea, and why it is still considered a best practice.

There are now more opinions backing up my thoughts. But in spite of this, many IT systems still believe that changing your password every 90 days or so makes things more secure.

Don’t get me wrong, security is important. It needs to be job one in every application that stores anything about me and in every IT department. Protecting my data is very important to me and I don’t want to do business with a company that doesn’t believe security is important.

I do believe that you are more secure with a longer password. And I would rather have a long password than be forced to change my password every 90 days. The problem is that sites make the determination for me by forcing me to change my password. Since long passwords are harder to come up with and remember, I end up with shorter passwords because I take the path of least resistance.

Why am I so worried? In a three-year-old post on his Coding Horror blog, Jeff Atwood, talking about a specific password-cracking program, says, “this attack covered 99.9% of all possible 14 character alphanumeric passwords in 11 minutes”. The problem is only getting worse. Some of the newer cracking programs take advantage of the massive amount of processing power in nVidia graphics chips, cutting the time it takes to crack passwords by 60% or more.
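
To put rough numbers on why length matters, here is a quick back-of-the-envelope sketch (my own illustration, assuming a 62-character alphanumeric alphabet; it computes keyspace sizes, not cracking times, since those depend on the attack):

using System;

class PasswordKeyspace
{
    static void Main()
    {
        // Brute-force work grows by a factor of 62 for every extra character.
        double space8  = Math.Pow(62, 8);   // ~2.2e14 combinations
        double space14 = Math.Pow(62, 14);  // ~1.2e25 combinations

        Console.WriteLine("8 chars:  {0:E2} combinations", space8);
        Console.WriteLine("14 chars: {0:E2} combinations", space14);
        Console.WriteLine("Ratio:    {0:E2}x more work", space14 / space8);
    }
}

Those six extra characters multiply a brute-force attacker’s work by 62^6, roughly 5.7 × 10^10; no 90-day rotation policy buys anything close to that.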

Yet I’m still forced to change my password on some sites. So I go with shorter passwords, because coming up with longer ones is difficult and I don’t want to do it every three months. Why is it so hard to get IT people, including myself at times, to acknowledge how security risks have changed and to change our behaviors? And to change “best practices”?

Resumes & the Job Search

Unfortunately, a few weeks ago I found out my current employer is closing its doors at the end of the year. So yesterday I went to outplacement training offered by my employer. I want to be fully prepared to start a job search and thought this training could help. I have to admit, though, I wasn’t looking forward to it. The training was scheduled for a full day, and I didn’t know how much I would get out of it. Ultimately I decided to go, since I thought my resume could stand some improvement.

Now I am very happy I went. Things have changed a lot in the world of resumes and job applications since my last job search. I also now believe my resume needed more work than I thought. In fact, I don’t think my resume would have gotten me past the first hurdle for most jobs.

All I can say is that if you are laid off or looking for a job for any reason, get some up-to-date help putting a resume together. What I thought I knew about job searching, and what I had found on the web, hadn’t really prepared me for what I learned in the training. And right now, I’m thinking the training was correct.

HTML 5

So I started looking at some of the new things available with HTML5 by going through the slideshow at HTML5Rocks. There’s some pretty cool stuff coming. I don’t think it is that far away, but it is definitely not immediate. To prove that, just visit the site with IE. The main HTML5Rocks site works in IE, but the slideshow doesn’t.

The slideshow should work fine in IE once IE9 comes out. But why do we have to wait? Mozilla and Google managed to update their browsers to start supporting HTML5. Why can’t Microsoft do the same? Why must we wait for IE9 (and then for adoption of IE9) before we can start taking advantage of many of the features of HTML5? It seems to me that a position like that just slows adoption of these great capabilities, which will make development easier. Making it even more ironic, word is already starting to get out about how to add support for HTML5 to ASP.NET MVC (http://www.deanhume.com/Home/BlogPost/asp-net-mvc-html5-toolkit/29).

But I digress; I meant for this post to be about the new capabilities coming, not about Microsoft taking so long to support them 🙂

So, here are my favorite features coming with HTML5:

  • The new input types for date, time, email, color, and number. Yes, many of these aren’t supported yet, and the richest support is only in Chrome. But when they come, it will be nice. They will make validation easier and make things more consistent across web sites, at least once everybody starts accommodating the new types.
  • Easy, easy ways to add audio and video to your site with the new audio and video tags.  
  • The new CSS selectors will make lots of things easier. And no more jQuery to get different backgrounds on alternate table rows!
  • The text-stroke, opacity, rounded-corner, shadow, and gradient support is fantastic, especially the gradients. I always hated the “hack” of using a 1px-wide image to get a gradient color in a background.
  • I can see local storage changing the game a lot. Today, if I have a multi-page “wizard”, I have to send and save the data on the server between pages. With this capability, I could store the data locally and send it once when the user hits the Save button, making my process use less bandwidth and be a bit more crash-proof.

I see no reason not to start using these features today. Modernizr is a JavaScript library that tells you which features are and aren’t available in the current browser. It extends that knowledge into CSS as well. For example, if you want one background style when gradients are supported and a different one when they are not, you can code:

/* Modernizr adds the cssgradients or no-cssgradients class to the html
   element; .banner is a hypothetical page element being styled */

.cssgradients .banner {
    /* gradients supported by the browser */
}

.no-cssgradients .banner {
    /* gradients not supported; fall back to a background image */
}

This does mean some extra work when creating your scripts and your styles. But it also means you can take advantage of these new features and degrade gracefully when necessary. So the question becomes: is the extra effort worth it?

I think it is.

Spam and Social Engineering

I continue to be amazed not by the quantity of spam but by its social engineering aspects, how well they seem to work, and how we tend to treat the people who fall for them.

In my full-time job, part of my responsibilities is providing desktop support (we are a small shop, so we all have a lot of roles). In that role, I’ve seen how well some of these spam and nasty emails work. For example:

  • We’ve seen a lot of “fake” retail invoices going out. I’ve had people click on the links in those emails, which take advantage of some IE holes and install some nasty software. I’m personally surprised that the emails work, even though there are issues with them that let me spot them as fakes almost instantly.
  • We’ve had a few emails arrive claiming we are in violation of copyright. The email is “sent” from a real law firm. But again, the content of the email makes me believe it is a fake almost instantly. This email, in fact, has been a big enough problem that the law firm had to put a message on its website letting people know that it did not send the copyright-violation email.

These instances got me thinking. How am I able to spot these fakes when many other people can’t? Granted, I am a much more sophisticated computer user than most. But why do I see the issues and conclude an email is fake while many other people don’t draw the same conclusion?

For example, many of these emails were sent to an email address that didn’t match the name in the message: Jane Public would receive an email addressed to John Smith. To me, this mismatch says “fake”. But John Smith sees it and forwards it to Jane Public because he is worried her order has a problem and she won’t know about it otherwise.

So, why do these types of emails work? And what can we do to make them not work as well?

We’ve all given the “be suspicious of emails” talk. Everybody has heard not to click on links in emails they don’t recognize. So the spammers get around this by sending emails from places people do recognize. When the email is from a place people do business with, many people will overlook minor issues and believe the email is legitimate.

How can we change the tools to help people distinguish a legitimate email from Amazon from a fake one? The spam filters don’t catch them, at least not right away. The mail programs display the email as legitimate. The email looks legitimate. But it’s not. And the tools do nothing to help people identify these emails as fakes.

We in IT don’t help the situation when we blame the user for clicking on these links. We act like the people who click on them don’t listen or don’t understand when we tell them how diligent they need to be. If this problem were caused by another person instead of an email, we’d call the person who fell for the plea too trusting or gullible. So why do we deride people for believing an email that looks legitimate?