The last time I shared one of my assignments with you, I received a lot of positive feedback on it, so I decided to do it again. If you haven’t read “The Cuckoo’s Egg,” I highly recommend it; however, it’s not required to proceed. Sure, there may be a few details here and there that aren’t 100% clear, but you’ll get the gist of it.

So, without further ado, here’s the paper I submitted for the digital forensics class I took last semester …

Imagine living in an apartment complex where someone had stolen the keys from the superintendent and was using them to go into your home and look around when you weren’t there. Then, one day, you realized someone had been there because he drank a soda and left the can behind, sitting on the countertop. 

Now imagine that you told everyone — your friends and family, neighbors, the building manager, the police — and no one seemed to care or think it was anything to be concerned about because all he did was drink a pop. He didn’t take anything valuable, and, heck, you don’t even technically own anything that valuable for him to take in the first place. 

That’s essentially what happened to Cliff Stoll during the investigation he recounts in “The Cuckoo’s Egg,” only it wasn’t a stranger breaking into his apartment — it was an international spy hacking into the Lawrence Berkeley National Lab’s computer network, which gave him access to the military network too.

Sure, technology has advanced exponentially since 1986, when Stoll originally looked into the 75-cent discrepancy that started his wild goose chase, and it should, seemingly, be way easier to catch a hacker nowadays. But the key word there is “seemingly.” You see, the main issue back then wasn’t that they didn’t have the technology they needed. In fact, even with the myriad tools we have available at our fingertips, the biggest challenge Stoll faced is still looming over our heads today … cybersecurity is reactive by nature, and we’ve yet to figure out how to clear the hurdle that stands in the way of a proactive approach.

So, let’s start at the beginning of the story. While it might make sense for employees to be moved around to different departments in some situations, this was not one of them. 

Stoll was an astronomer — he had no place on the IT team unless he had been properly trained before working on his own, which he hadn’t been. With a proactive cybersecurity approach in place, this never would have happened, because a risk assessment would have identified ways he could cause complex problems while trying to troubleshoot simple ones.

Furthermore, there was no real chain of command, which is something that doesn’t require any technology at all. Had someone actually been responsible for the safety of the network, this might not have happened in the first place, or it could at least have been less impactful.

Additionally, there were still several technologies available then that simply weren’t employed as a result of the blasé attitude most people involved had toward cybersecurity.

For example, hashing was available. But even though everyone in the department at Lawrence Berkeley National Lab knew there was someone snooping around their computers, the files were never hashed so their integrity could be verified. Their only defense against the hacker altering their data was to hope he didn’t.
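
To put that in perspective, file-integrity checking takes only a few lines of code. Here’s a minimal sketch in modern Python; hashlib and SHA-256 stand in for whatever tooling 1986 offered, and the filenames are invented for the example.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash in fixed-size chunks so big files don't exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo files so the sketch runs as-is; real targets would be system files.
watched = [Path("passwd.txt"), Path("accounting.log")]
for p in watched:
    p.write_text("known-good contents\n")

# Record a baseline while the files are known to be good ...
baseline = {p: sha256_of(p) for p in watched}

# ... then, on every later check, any mismatch means the file was altered.
for p, known_good in baseline.items():
    if sha256_of(p) != known_good:
        print(f"ALERT: {p} changed since the baseline")
```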

They did use hashing for the passwords, but employees weren’t required to use strong ones. That, combined with the fact that the administrative password was never changed, meant the hashing was essentially useless. It’s unclear whether the hacker had a table of known hashes, but he very easily could have; Stoll and his team never considered the possibility until after they knew he was breaking in (again, a reactive realization). And such a table would have worked well, since everyone just used typical dictionary words.
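
To illustrate why that matters, here’s a toy version of a precomputed-table attack against unsalted password hashes. The usernames, passwords, and the SHA-256 stand-in for the era’s crypt() are all my own assumptions, not details from the book.

```python
import hashlib

# SHA-256 stands in for the era's one-way crypt() function; the idea is the same.
def hash_pw(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

# A "stolen" password file of unsalted hashes. Users and passwords are invented.
password_file = {
    "jsmith": hash_pw("wizard"),
    "lblops": hash_pw("field"),
}

# The attacker hashes a word list once, up front ...
table = {hash_pw(w): w for w in ["password", "secret", "wizard", "field"]}

# ... then reads passwords straight out of the table. No salt, no cracking.
for user, digest in password_file.items():
    if digest in table:
        print(f"{user} uses the password '{table[digest]}'")
```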

It’s hard to say what their biggest mistake was because there were so many. But it’s possible that the No. 1 oversight was not deleting inactive accounts. It may not have left the door wide open for the hacker — he still had to break in first and replace the hash in the password file in order to log in disguised as the “legitimate” user — but it definitely left it unlocked. The method he used wouldn’t have worked on active users, because they would have reported to IT that their password wasn’t working. And, since the passwords were hashed, the IT team would have done exactly what the hacker did — replace the hash value in the password file to reset it for the user. At that point, the hacker’s password would no longer work, and the cycle would continue until the hacker found a different way in or was discovered. Since the inactive accounts were still available but weren’t being monitored, he pretty much had free rein.
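
A dormant-account sweep is about as simple as security controls get. Here’s a rough sketch; the dates and the 90-day cutoff are invented, though “sventek” nods to the real dormant account the hacker hijacked.

```python
from datetime import datetime, timedelta

# Hypothetical records of username -> last login.
last_logins = {
    "stoll":   datetime(1986, 8, 30),
    "sventek": datetime(1985, 1, 10),
}

today = datetime(1986, 9, 1)
cutoff = today - timedelta(days=90)

for user, last_seen in last_logins.items():
    if last_seen < cutoff:
        print(f"{user}: no login in 90+ days; disable or delete this account")
```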

Reference monitors were available and in use at Lawrence Berkeley National Lab at the time. Unfortunately, they weren’t being used in the most effective way. Had they been, the 75-cent accounting discrepancy that led to Stoll’s year-and-a-half-long search could have been a nonissue from the beginning. With the new automated system in place, a number of safeguards were possible. A silent reference monitor could have refused to create the new user because no accounting profile was attached. For example, when the attacker entered the command for a new user, certain information was required: username, password, employee ID, etc. If an accounting profile wasn’t attached, the new user information would be purged when the reference monitor kicked him out. Because the monitor was silent, the attacker wouldn’t know what was going wrong, and he would have to keep trying different combinations of commands until he got it right, making success much less likely.

There could also have been extra steps required to manually add a user, since the automated system was preferred. One example is requiring two users with admin privileges to validate the request, a safeguard that dates back many years and has been used across industries to prevent mistakes as well as security breaches. Though the hacker could have gained access to two admin accounts, it would have made things more complicated for him, and, with a silent reference monitor, it probably would have taken him a long time to figure out what the problem was.
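
Here’s a bare-bones sketch of how those two ideas, the silent refusal and the two-admin rule, might look in code. The field names and the two-approval threshold are assumptions for illustration, not the lab’s actual system.

```python
# Required fields for account creation; names are hypothetical.
REQUIRED_FIELDS = {"username", "password", "employee_id", "accounting_profile"}

def create_user(request: dict, approvals: set) -> bool:
    # Two-person rule: manual account creation needs two distinct admin sign-offs.
    if len(approvals) < 2:
        return False   # silent refusal: no error message to guide an attacker
    # Purge the request if anything, including the accounting profile, is missing.
    if not REQUIRED_FIELDS <= request.keys():
        return False   # also silent
    # ... provision the account here ...
    return True

# The attacker's request lacks an accounting profile, so it quietly fails:
req = {"username": "hunter", "password": "x", "employee_id": "42"}
print(create_user(req, approvals={"admin_a", "admin_b"}))   # False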

Yet another tool they had at their fingertips was the ability to monitor. At one point, Stoll was receiving alerts on his pager whenever the attacker was logged on. So, they could have set something up where the IT department was notified whenever there was unusual activity (a user manually added, an inactive user logging on, a user logging on outside of normal hours, etc.).
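
Something like the following would have done it. The events, the business hours, and the notify() stub are all hypothetical; the point is how little logic the alerting actually requires.

```python
# Dormant accounts and business hours are assumptions for the example.
INACTIVE_ACCOUNTS = {"sventek"}
BUSINESS_HOURS = range(7, 19)             # 7:00 a.m. to 6:59 p.m.

def notify(message: str) -> None:
    print(f"PAGE IT ON-CALL: {message}")  # stand-in for a pager or email hook

# Invented login events: user, hour of login, whether the account was added manually.
events = [
    {"user": "stoll",   "hour": 14, "manual_add": False},
    {"user": "sventek", "hour": 3,  "manual_add": False},
    {"user": "hunter",  "hour": 10, "manual_add": True},
]

for e in events:
    if e["user"] in INACTIVE_ACCOUNTS:
        notify(f"dormant account {e['user']} just logged in")
    if e["hour"] not in BUSINESS_HOURS:
        notify(f"{e['user']} logged in at {e['hour']}:00, outside normal hours")
    if e["manual_add"]:
        notify(f"account {e['user']} was created manually; verify with admins")
```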

When it comes to management, they had a lot of weak links in their chain. For starters, there was no auditing process. They should have performed checks on a periodic basis (whatever schedule they determined was best for their operations). These, say, quarterly audits could have included things like making sure the admin password was changed to something new and difficult to guess, verifying that no inactive users existed, checking for abnormal activity (this was possible, as they ran a report to determine the average time the attacker logged on and the average amount of time he remained active), performing software updates, performing pen testing, etc.
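
That last report suggests how an automated abnormal-activity check might work: build a baseline from sessions known to be normal, then flag anything that deviates sharply. The session data and the three-standard-deviation threshold below are invented for illustration.

```python
from statistics import mean, stdev

# Baseline built from sessions known to be normal: (start hour, minutes active).
normal = [(9, 45), (10, 50), (9, 40), (10, 55)]
h_mean, h_sd = mean(h for h, _ in normal), stdev(h for h, _ in normal)
m_mean, m_sd = mean(m for _, m in normal), stdev(m for _, m in normal)

def unusual(start_hour: int, minutes: int) -> bool:
    # Flag a session more than three standard deviations from the baseline.
    return (abs(start_hour - h_mean) > 3 * h_sd
            or abs(minutes - m_mean) > 3 * m_sd)

print(unusual(10, 50))   # False: an ordinary mid-morning session
print(unusual(3, 180))   # True: 3 a.m. and three hours long
```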

Aside from the fact that there were no notifications to verify new user accounts with the actual system admins, a rogue account is exactly the kind of thing an audit could have caught. Even so, when Stoll pointed out to one of the department heads that the new account for Hunter didn’t have an accounting profile, the man simply assumed one of the other managers had created the account and blew it off. It was Stoll who took it upon himself to verify the validity of the account.

The lack of communication and protocol among the managers and department heads was a detrimental oversight.

There were other minor clues they overlooked that could have made a difference. They knew — or, at least, correctly assumed — that the hacker wasn’t from the West Coast based on his use of the “g flag,” which wasn’t used in Berkeley Unix. But when the trace showed the connection took longer than expected, it was just sort of brushed off as strange. Though the timestamps alone weren’t enough to pinpoint the attacker’s location, that information combined with the trace could have helped them discover earlier that the spy was breaking in from another country. After all, that is pretty much how they came to that realization anyway, but not until someone outside of the team suggested it (which shows the importance of having a diverse workforce).
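
For what it’s worth, the arithmetic behind that timing clue is simple. Assuming signals move through cable at roughly two-thirds the speed of light (about 200 km per millisecond), a measured echo delay puts a ceiling on the distance; the delay below is a made-up number.

```python
# Rough physics: ~200 km of cable per millisecond of one-way travel time.
round_trip_ms = 220            # hypothetical measured echo delay
one_way_ms = round_trip_ms / 2
km_per_ms = 200
print(f"Connection spans at most ~{one_way_ms * km_per_ms:,.0f} km of cable")
# A West Coast caller should echo back in a handful of milliseconds; delays in
# the hundreds point to transoceanic links or satellite hops (~250 ms each way).
```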

There are surely tools available now that were born of situations like this one and simply didn’t exist then. Programs like Wireshark, tcpdump, and traceroute; RSA tokens; digital signatures; public and private keys; multifactor authentication; and more could be used today to prevent such attacks from occurring, make it easier to catch attackers, or protect data during an attack.

Simply having a tool like traceroute could have made it obvious that they were watching someone from Germany break into the Lawrence Berkeley National Lab computers. And once it became an international affair, the investigation took a turn in the right direction. Knowing this upfront could have accelerated things tremendously, especially in the beginning.
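
Even wrapping the stock command-line tool gets you most of the way. A quick sketch, assuming traceroute is installed (the destination host is just a placeholder):

```python
import subprocess

# Calls the system traceroute; Windows users would swap in "tracert".
hops = subprocess.run(
    ["traceroute", "-m", "30", "example.com"],   # -m caps the hop count
    capture_output=True, text=True, check=False,
)
print(hops.stdout)   # one line per router; hostnames often hint at the country
```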

But, despite the fact that we have more technologically advanced tools available today, we also still have hackers, and cybercrime is at an all-time high. So, technology isn’t the one and only solution to cybercrime.

Too many people are preoccupied with more pressing issues to worry about cybersecurity until they’re already under attack. “A software update to patch vulnerabilities? Oh, I’ll get to that later.” “Use unique passwords? Why, so I’ll forget them all and have to reset them every time I log in? No, thanks.”

We’ve all witnessed other people and companies using poor cyber hygiene, and most of us are even guilty of it ourselves. We like to think that because nothing has happened yet, nothing will happen. But, the rise in cyberattacks says we’re wrong about that.

Stoll’s story may seem like an organizational issue, but it went way beyond Lawrence Berkeley National Lab. Not only was the attacker able to get onto the military network, but the FBI was largely uninterested in the case for most of the investigation.

Since hackers will try anything to get in, it’s imperative that IT and cybersecurity professionals do the same and try every possible way to keep them out — not just patching the holes hackers have already dug up. With that being said, it’s not so cut-and-dried whether Stoll and the team at Lawrence Berkeley National Lab had the right idea by allowing the hacker to snoop around, or whether they should have been more like the guys over at Dockmaster and shut him out as soon as they got wind of an intruder. The thing is, sometimes people try the easy, obvious way first. In this case, it was the GNU Emacs bug. No doubt, it was a known vulnerability, but hackers rely on people being too busy or too lazy to apply system updates and vulnerability patches (and you can see why). So, if they had caught him right away and kicked him out, would he have given up? Probably not. He would have kept trying, and, most likely, he would have gotten in at some point — maybe through a less discoverable, more effective means. Maybe he would have gone unnoticed for longer. Maybe he would have done a lot more damage.

The argument about the right move there could probably go either way. But one thing’s for sure: They didn’t do anything to protect their data or their network while he was poking around. Setting up the fake documents was a smart move, but, again, they did nothing to protect the real documents — not even so much as changing the “public” setting to “private.”

Stoll was the only one in this situation who seemed to think that simply having a stranger snoop around your house was creepy enough in itself to be a concern, regardless of whether or not anything was stolen.

But, we all know that most experienced criminals don’t just go in for the kill — they case the joint first. So, why do we still wait until something happens to react?

The cuckoo bird that flew into the Lawrence Berkeley National Lab computers taught us about cybersecurity. But it’s almost 40 years later, and we’re still fighting hackers. That’s not going to change much no matter how much time passes. We can try to fix it with technology, but with new technology comes new problems. What’s more important than new, advanced defense tools is a new, proactive defense strategy that starts with putting safety and security above ego and convenience.