Thursday, August 6, 2015

Shock News: Trusted Sites Serve Malware in Ads



Yes, I know. We shouldn't really be particularly surprised that a legitimate site - even one the size of Yahoo - has ended up mistakenly serving some form of badware through its advertising networks. It’s not the first time. Yahoo hit the headlines for malware-related problems in 2014, when an affiliate traffic-pushing scheme targeted Yahoo users with malware served through adverts on the Yahoo website, and now it’s happened again.

Ad revenue on the Internet is hard to live on at the best of times, and we can expect "lowest cost" behaviours, including, but not limited to, fairly rudimentary checks on the intentions of advertisers.

The obvious thing to do here is to bleat on about the efficacy of having a web filter in fighting some of those attacks - you've read that before, hey, you may have even read it before from me. Fill in this section on your own, as an exercise for the reader.

You probably also know how important HTTPS interception is - of course, this malware was served over HTTPS, wouldn't want any pesky insecure mixed content now, would we? Again, I’ve expounded at length on the subject. No HTTPS scanning = no security. Don't accept "blacklists" of sites that get MITM scanned: the delivery site won't be on that list, and your malware sails on through free and easy.

The thing I want to mention today is the other big secret of content filtering: some web filters only apply the full gamut of their filtering prowess to sites that are not already in their blocklists. This is wonderful for performance. It might even mean you only need a single web filter to provide for a huge organisation - but when a "trusted" site, that's already "known" to the web filter, bypasses some of the content filtering in order to save a few CPU cycles you may be getting a false economy.
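The fast-path trade-off described above can be sketched in a few lines. This is purely illustrative logic with hypothetical names, not any vendor's actual implementation:

```python
# Sketch of the "trusted site" fast path: known-good domains skip the
# expensive content scan, which is exactly how a malicious ad on a
# trusted site sails through.

TRUSTED_DOMAINS = {"yahoo.com", "bbc.co.uk"}  # sites already "known" to the filter

def deep_scan(body: bytes) -> bool:
    """Stand-in for a full content scan: flag anything carrying our marker."""
    return b"EICAR" in body

def filter_response(domain: str, body: bytes, fast_path: bool) -> str:
    if fast_path and domain in TRUSTED_DOMAINS:
        return "allow"  # CPU cycles saved; badware not inspected
    return "block" if deep_scan(body) else "allow"

malicious_ad = b"ad markup ... EICAR ..."  # stand-in for badware served via an ad network
print(filter_response("yahoo.com", malicious_ad, fast_path=True))   # allow (!)
print(filter_response("yahoo.com", malicious_ad, fast_path=False))  # block
```

The false economy is the first call: the trusted domain is exactly where the malicious ad arrives from.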

Tuesday, July 28, 2015

Happy Birthday - Smoothwall Celebrates 15 Years



Fifteen years ago, Lawrence Manning, a co-founder of Smoothwall, sat in his front room putting the final touches on a prototype for a special kind of software. 

This week, we spent some time catching up with Lawrence as he reflects on the 15-year progression of Smoothwall from an Open Source Linux project to the UK's number one web filter.

SW: Where did the name Smoothwall come from?


LM: We had a couple of ideas for names. Since we were trying to popularize this through the Linux user groups, one of our ideas was to call it LUGWall. I’m glad we didn’t choose that! “SoHo” was a popular buzzword at the time, so we also had SoHo-Connect. And one of the other rejected names was WebbedWall, which I kind of like. The idea was also to have a “family” of projects one day, so we wanted a name that could be adapted. SmoothMail (email solution), and SmoothLinux which was going to be a desktop distribution based on Smoothwall ideas. Needless to say, nothing came of those ideas. There were rumours that the “Wall” part was named in honour of Larry Wall, the original author of the Perl programming language: the main language used in the project. I’m still not certain how much truth there is in this, but it’s a nice touch if it is true. Anyway, we went through a bunch of names and liked Smoothwall the best.

SW: What prompted you to start the first Open Source Smoothwall?

LM: The need for something to do! Not working at the time, I had energy to spend. And also the, maybe arrogant, belief that I could do something “better”. There were alternatives around - not many, but some. Every one that we looked at was difficult to use, difficult to set up. The combination of those things was a pretty good driver.

SW: Why did you choose Open Source instead of Proprietary?

LM: Open Source is “free marketing”. I’m far from a believer that Open Source is the only way to make good software, but it is a great way to get people interested in what you are doing. In the early days of the project, I wrote all the code. But the fact it was Open Source (though it wasn’t run like a typical Open Source project) meant that people felt encouraged to tinker with it, and that led to ideas, and eventually code being contributed. This would not have happened if we’d kept the code closed; the interest just wouldn’t have been there.

SW: Why Linux?

LM: Well, there weren’t really any alternatives. I guess compared to the BSDs the driver support was better, but more than that, it was familiar. And we liked it of course. It was, and remains, the best platform for this kind of product, evidenced by the fact that everyone uses it in everything.

SW: What does it feel like to have invented a product that is responsible for 150 jobs?

LM: Obviously I’m very proud of what we have accomplished. What is especially gratifying, beyond the fact that we’ve created a company with, I believe it is right to say, a good ethical record, is that its main business is keeping people safe.

SW: Did you imagine when you started that Smoothwall would be where it is today?

LM: Nope! I honestly believed this thing would go on for about six months, and then I’d be forced back to Windows development work, with Smoothwall just being another little project to add to the list of little projects I’d worked on over the years.

SW: What's your favorite Star Trek character, or episode and why?

LM: 7 of 9? Actually it is Scotty. Series-wise, The Original Series still stands the test of time. Within that series, I have too many favourite episodes to list. The newer stuff is good too of course, but you can’t beat TOS. Oh, and “Into Darkness” sucks!

SW: How did you meet George and Daniel?

George: I first met him at a motorway service station, near Exeter I think, to discuss commercial angles around Smoothwall. I was quite apprehensive because prior to it he’d sent me a big list of technical questions about Smoothwall, many of which I had no idea how to answer!

LM: Well, George headhunted Daniel. Prior to actually meeting him I’d downloaded his DansGuardian software, which is basically what we wanted Daniel for, and played around with it, and of course had loads of questions. We got on great from the beginning, though I do remember being appalled with his first crack at a Guardian user interface!

SW: What's your best Smoothwall memory?

LM: There are many, of course. From a development point of view, I don’t believe I have ever been as productive as I was in the 3 months after the company was founded. In those 3 months I wrote the first versions of our VPN add-on (which is roughly what is sold today), a simple web filter module, and other things. Working only from one-sentence requirements, on your own, having to design UIs yourself, having to actually get the thing to do what it has to do and having to test it all, is both intimidating and extremely rewarding.

I remember writing the first version of an early add-on module called SmoothHost in this way, in an afternoon. Over the years we probably made a million pounds in revenue from that afternoon’s work. That kind of pure creative, seat-of-the-pants way of working, I have to admit, I miss immensely.


Outside of the working environment, we’ve had some great company weekends. My favorite is probably the trip to Coniston in the Lake District. I think it was 2007. The company was still “innocent” then. It was a superb weekend.


Friday, June 12, 2015

Time For a Digital Detox or Better Filtering?



Being easily distracted has been a thorn in my side since Oldbury Park Primary School. I remember the day when mum and dad sat me down and read out my year 6 school report. Things were going so well, and then - boom - a comment from Mrs Horn that rained on my previously unsullied education record. “Sarah can organize herself and her work quite competently if she wishes, but of late has been too easily distracted by those around her.” She had a point, but try telling that to a distraught eleven-year-old who valued the opinion of her teachers. I made a vow after that. I would never let my report card be sullied again. Working on my concentration in secondary school and college helped me to pass my GCSEs and A-levels.


Then, when I entered the world of work I found an environment not too dissimilar to school. There were managers to impress, friends to win, and office politics instead of playground politics. Comme ci, comme ça. But I was better informed this time, and found ways to stay focused: wearing headphones (a great way to show you're otherwise engaged), meditation (limited to the park, never in the office), and writing to-do lists. But these are workplace tactics; if I were a student now, my report would probably be worse. I'd be lost with access to so many devices and so much time-wasting material.

So there, I’ve laid bare more than I should have, but I think my personal character assassination has been worth it, because it’s proved a point. Kids have always been distracted; tech has just made the problem worse. In addition to the usual classroom distractions, teachers now have to manage digital distractions, and it’s all affecting students’ progress.


For the head of the Old Hall School in Telford, Martin Stott, observing this trend was worrying. He said, “It seems to me that children’s ability to take on board the instructions for multi-step tasks has deteriorated. For a lot of children, all their conversation revolves around these games. It upsets me to see families in restaurants and as soon as they sit down the children get out their iPads.” Stott isn’t the first to raise the issue of digital dependency (there are digital detox centers for adults who want a break from tech). He might, however, be the first to bring the issue to the education arena and get significant media coverage, by introducing a week’s digital embargo at his school. Students have to put away the Xboxes and iPads, and turn off the TV, in an attempt to discover other activities like reading, board games and cards.

I’m split on the whole digital detox idea. The cynic asks how a one-week break can make any real change to the amount of time kids spend on devices. And restricting them completely is a sure-fire way to spark rebellion. But my optimistic side says it’s a step in the right direction. It raises awareness by asking kids to realize that there’s life outside Minecraft and social media. Now that’s not so bad.

Nonetheless, I do think that the problems with device dependency at Old Hall School could be solved with better filtering instead of a digital detox. As existing users will tell you, there’s a trusty little tool in our web filter known as ‘limit to quota’. Admins can configure the amount of time users can spend on different types of material, including material classified as time-wasting. According to predefined rules, users can use their allocation in bite-sized chunks, and be prompted every five or ten minutes with an alert stating how much they’ve used. That way there’ll be no nasty shocks; when the timer eventually runs out after 60 minutes, they’ll be able to continue using the safe parts of the web that support their educational needs, without the distractions. Now that’s got to be more appealing than dropping the devices cold turkey, isn’t it?
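A rough sketch of how a quota rule like this behaves. The names and messages here are hypothetical, not the actual Smoothwall implementation:

```python
class BrowsingQuota:
    """Toy model of a 'limit to quota' rule: a daily allowance for
    time-wasting categories, consumed in bite-sized chunks."""

    def __init__(self, allowance_mins=60, chunk_mins=5):
        self.allowance = allowance_mins
        self.chunk = chunk_mins
        self.used = 0

    def request_chunk(self):
        """User asks for another chunk of time-wasting browsing;
        the reply doubles as the periodic 'how much is left' prompt."""
        if self.used + self.chunk > self.allowance:
            return "quota exhausted: only safe/educational sites now"
        self.used += self.chunk
        return f"granted: {self.allowance - self.used} minutes left today"

quota = BrowsingQuota(allowance_mins=60, chunk_mins=10)
print(quota.request_chunk())  # granted: 50 minutes left today
```

Once the allowance is spent, requests fall through to the "safe parts of the web" policy rather than a hard block of everything.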


Tuesday, June 2, 2015

It's no Fun Being Right All the Time

Last week, I finally got around to writing about HideMyAss, and doing a spot of speculation about how other proxy anonymizers earn their coin. Almost immediately after I hit "publish", I spotted this article pop up on ZDNet. Apparently/allegedly, Hola subsidise their income by turning your machine into a part-time member of a botnet.
Normally, I really enjoy being proved right - ask my long-suffering colleagues. In this case, though, I'd rather the news wasn't quite so worrying. A bit of advertising, click hijacking and so forth is liveable. Malware? You can get rid... but a botnet client means you might be part of something illegal, and you'd never know the difference.

Thursday, May 28, 2015

"Hide My Ass" Comes Out of Hiding


The Internet has a chequered history with the humble ass. Kim Kardashian attempted to “break the Internet” with hers, and now we see VPN service “Hide My Ass” sold for £40 million to AVG. This subscription-driven VPN service is an interesting case study. Many VPN services are surprisingly coy about where they get their revenue, and about why they exist. HMA, on the other hand, are pretty up front: it was started as a way to bypass school filters, and it is subscription based. It’s nice to see the articles finally showing what we’ve long known - these services are, in the main, used for bypassing school or workplace filtering, and not only by oppressed revolutionaries in a far-off land. Nor is Hide My Ass a way to avoid the long arm of the law: they have, in the past, given up users’ browsing details under court orders. What of other VPN providers - the “free” ones? Even subscription-supported HMA freely admit they use affiliate marketing schemes to help keep the cost of plans down - what are the others doing to support the cost of bandwidth? Selling data, perhaps? For those with client software, they could be inspecting your secure connections! There have even been cases where proxy/VPN software has inserted malware. Our advice: block ‘em all - and think twice if you are a user attempting to connect to a VPN service. Despite the name, and the youth of its creator, HMA is a pretty grown-up VPN system - the others, well - who knows?
 

Friday, May 15, 2015

Game of 72 Myth or Reality?

I can’t pretend that, in the mid 90s, I didn't pester my mum for a pair of Adidas popper joggers. Or that I didn't, against my better judgement, strut around in platform sneakers in an attempt to fit in with the in crowd. But emulating popular fashion was as far as I got. I don’t remember ever doing stupid or dangerous dares to impress my classmates. Initially I thought maybe I was just a good kid, but a quick straw poll around Smoothwall Towers showed that my colleagues don’t recall hurting themselves or anyone else for a dare either. The closest examples of pranks we could come up with between us were knock-and-run and egg-and-flour - hardly show-stopping news.
But now, teenagers seem to be taking daring games to a whole new level through social media, challenging each other to do weird and even dangerous things. Like the #cinnamonchallenge on Twitter (where you dare someone to swallow a mouthful of cinnamon powder in 60 seconds without water). A quick visual check of the hashtag shows it’s still a thing today, despite initially going viral in 2013, and doctors having warned teens about the serious health implications.

Now, apparently, there’s another craze doing the rounds. #Gameof72 dares teens to go missing for 72 hours without contacting their parents. The first suspected case was reported in a local French newspaper in April, when a French student disappeared for three days and later told police she had been doing Game of 72. Then, in a separate incident, on 7 May, two schoolgirls from Essex went missing for a weekend in a suspected Game of 72 disappearance. Police later issued a statement to say the girls hadn't been playing the game.

So why then, despite small incident numbers, and the absence of any actual evidence that Game of 72 is real, are parents and the authorities so panicked? Tricia Bailey from the Missing Children’s Society warned kids of the “immense and terrifying challenges they will face away from home.” And Stephen Fields, a communications coordinator at Windsor-Essex Catholic District School Board, said “it’s not cool”, and has warned students who participate that they could face suspension.

It’s completely feasible that Game of 72 is actually a myth, created by a school kid with the intention of worrying the adults. And it’s worked; social media has made it seem even worse, when in reality, it’s probably not going to become an issue. I guess the truth is, we’ll probably never know, unless a savvy web filtering company finds a way of making these Twitter-mobile games trackable at school, where peer pressure is often at its worst. Wait a minute... we already do that.
Smoothwall allows school admins to block specific words and phrases, including Twitter hashtags. Say, for instance, that students were discussing Game of 72, or any other challenge, by tweet, and that phrase had been added to the list of banned words or phrases; the school’s administrator would be alerted, and the students' parents could be notified. Sure, it won’t stop kids getting involved in online challenges, because they could take it to direct message and we’d lose the conversation. But, I think you’ll probably agree, the ability to track what students are saying in tweets is definitely a step in the right direction.
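The phrase-matching idea can be sketched very simply. One practical wrinkle: squashing out punctuation and spaces lets "#Gameof72", "game of 72" and "game-of-72" all hit the same banned entry (at the cost of occasional matches across word boundaries). This is illustrative only, not Smoothwall's actual matcher:

```python
import re

def squash(text: str) -> str:
    """Lower-case and strip everything except letters and digits, so
    '#Gameof72', 'Game of 72' and 'game-of-72' all compare equal."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

# Example watch-list; a real deployment would be admin-maintained.
BANNED = {squash(p) for p in ("game of 72", "cinnamon challenge")}

def flag_tweet(tweet: str) -> bool:
    """True if the tweet contains any banned phrase (alert the admin)."""
    body = squash(tweet)
    return any(phrase in body for phrase in BANNED)

print(flag_tweet("who's up for #Gameof72 this weekend?"))  # True
print(flag_tweet("revising for my GCSEs, send help"))      # False
```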

Wednesday, May 6, 2015

Bloxham Students Caught Buying Legal Highs at School




It’s true what they say: history repeats itself. This is especially true in the world of web security, where tech-savvy students with an inquisitive nature try to find loopholes in school filters to get to where they want to be, or to what they want to buy.

Back in September we blogged about two high-profile web filtering breaches in the US, highlighting the cases of Forest Grove and Glen Ellyn Elementary District. Both made the headlines because students had successfully circumvented web filtering controls.

Now the media spotlight is on Bloxham School in Oxfordshire, England, after pupils were caught ordering legal highs from their dorms. See what I mean about history repeating itself? Okay, so the cases aren’t identical, but there is a unifying element. The Forest Grove student was found looking at erotica on Wattpad, students from Glen Ellyn were caught looking at pornography, and at Bloxham it’s “legal” highs. The unifying factor in all three cases is that they were facilitated by a failure in the school’s web filter.

The difficulty, though, is working out what exactly went wrong with Bloxham’s filter, because none of the technical details have been announced. Were students allowed access to websites selling recreational drugs, or was there an oversight on the part of the web filtering management? In the original story, broken by The Times, a teenage pupil was reported to have been expelled, and other students disciplined, following an investigation by the school which found they had been on said websites.

Without knowing the details, it is probably wrong to speculate; however, I’m going to do it anyway! It’s entirely possible Bloxham chose a more corporate-focussed web filter. In a corporate environment, “legal” highs may not present as much of an issue as in an education setting. With a strong focus on education, Smoothwall’s content filter has always been good at picking up these types of site. This is helped by the real-time content filter not being reliant on a domain list, as these sites are always on the edge of the law, and move rapidly. Because the law is different depending upon where you live - and, indeed, is rapidly changing regarding these substances - Smoothwall doesn’t attempt to differentiate between the grey area of “legal highs” and those recreational substances on the other side of the law. All of them come under the “drugs” category. This gives a solid message across all age ranges, geographies and cultures: it’s best not to take chances with your health!

Wednesday, April 22, 2015

A new option to stem the tide of nefarious Twitter images...

Smoothwall's team of intrepid web-wranglers have recently noticed a change in Twitter's behaviour. Where once it was impossible to differentiate the resources loaded from twimg.com, Twitter now includes some handy sub-domains, so we can differentiate the optional user-uploaded images from the CSS, buttons, etc.

This means it's possible to prevent Twitter loading user-content images without doing HTTPS inspection - something of a broad brush, but given the fairly hefty amount of adult content swilling around Twitter, it's far from being the worst idea!

Smoothwall users: Twitter images are considered "unmoderated image hosting" - if you had previously made some changes to unblock CSS and JS from twimg, you can probably remove those now.

Tuesday, March 31, 2015

Pukka Firewall Lessons from Jamie Oliver


In our office I’m willing to bet that food is discussed on average three times a day. Monday mornings will be spent waxing lyrical about the culinary masterpiece we’ve managed to prepare over the weekend. Then at around 11 someone will say, “Where are we going for lunch?” Before going home that evening, maybe there’s a question about the latest eatery in town. 

I expect your office chit-chat is not too dissimilar to ours, because food, and what we do with it, has skyrocketed in popularity over the past few years. Cookery programmes like Jamie Oliver's 30 Minute Meals, The Great British Bake Off and MasterChef have been a big influence.

Our food obsession, however, might be putting us all at risk, and I don’t just mean from an expanded waistline. Cyber criminals appear to have turned their attention to the food industry, targeting Jamie Oliver’s website with malware. This is the second time that malware has been found on the site. News originally broke back in February, and the problem was thought to have been resolved. Then, following a routine site inspection on the 13th of March, webmasters found that the malware had returned, or had never actually been completely removed.

It’s no surprise that cyber criminals have associated themselves with Jamie Oliver, since they’ve been leeching on pop culture and celebrities for years. Back in 2008, typing a star’s name into a search engine and straying away from the official sites was a sure-fire way to get malware. Now it seems they’ve cut out the middleman, going straight to the source. This malware was planted directly onto JamieOliver.com.

Apart from bad press, Jamie Oliver has come away unscathed. Nobody has been seriously affected, but the situation could have been much worse had the malware got into an organisational network.

Even with no real damage there’s an important lesson to be learned. Keep your firewall up to date so it can identify nefarious code contained within web pages or applications. If such code tries to execute itself on your machine, a good firewall will identify this as malware.

Wednesday, March 18, 2015

5 Important Lessons from the Judges Who Were Caught Watching Porn



I've never been in court before or stood in a witness box, and I hope I never do. If I am, however, called before a judge, I’d expect him or her to be donning a funny wig and a gown, to be of above average intelligence, and to judge my case fairly according to the law of the land. What I would not expect is for that judge to be indulging while in the office, as these District Judges have done. Four of Her Majesty’s finest have been caught watching porn on judicially-owned IT equipment. While the material didn't contain illegal content or child images, it’s easy to see why the case has attracted so much media attention. I mean, it’s the kind of behaviour you would expect from a group of lads on a stag do, not from a District Judge! Now the shoe is on the other foot, and questions will be asked about how a porn culture was allowed to develop at the highest levels of justice. Poor web usage controls and a lack of communication were more than likely to blame. But speculation aside, the world may have passed the point where opportunity can remain unrestricted enough to allow things like this to happen. Employees, especially those in high positions, are more vulnerable and need protection. So here are 5 important lessons on web filtering from 4 District Judges:

1. Know Your Organisational Risk – The highest levels of staff pose the highest risk to the organisation. Failures on their part risk the credibility of the whole organisation.

2. Recognise Individual Risk – While not always the case, veteran leadership may be the least computer literate, and risk stumbling into ill-advised territory accidentally.

3. Communicate with Staff – Notification of acceptable use policies can go a long way to getting everyone on the same page, and helps with legal recourse when bad things do happen.

4. Be Proactive – Use a web filter to block what’s not acceptable instead of leaving that subject matter open to traffic. If you still want to give your staff some flexibility, try out a limit-to-quota feature.

5. Trust No One (Blindly) – Today’s internet environment makes a blind, trust-based relationship foolish. There is simply too much shady stuff out there, and much of it is cleverly disguised.

If there is anyone out there reading this and thinking "this would never happen in my organisation; my staff would never do that", think again, my friend. Nobody is perfect; the urge to look at inappropriate content knows no bounds, including the heights of hierarchy. We’re all potential infringers, as proved by Judges Timothy Bowles, Warren Grant, Peter Bullock and Andrew Maw.

Thursday, March 5, 2015

Statement: Smoothwall and the "FREAK" Vulnerability

In light of the recent "FREAK" vulnerability, in which web servers and web browsers can be cajoled into using older, more vulnerable ciphers in encrypted communications, we would like to assure customers that the web server configuration on an up-to-date Smoothwall system is not vulnerable to this attack.

Similarly, if you are using "HTTPS Decrypt & Inspect" in Smoothwall, your clients' browsers will be afforded some protection from the attack, as their traffic will be re-encrypted by the web filter, which does not support downgrading to these "Export Grade" ciphers.

Wednesday, March 4, 2015

Searching Safely When HTTPS is Mandatory



Nobody wants anyone looking at their search history. I get it. I mean, look at mine - oh wait, don't - that's quite embarrassing. Those were for a friend, honestly.

Fortunately for us, it's pretty difficult to dig into someone's search history. Google even forces you to log in again before you can view it in its entirety. Most search engines now encrypt our traffic by default, too - some even use HSTS to make sure our browsers always go secure. This is great news for consumers, and means our privacy is protected (with the notable exception of the search provider, who knows everything and owns your life, but that's another story).

This all comes a little unstuck, though - sometimes we want to be able to see inside searches. In a web-filtered environment it is really useful to be able to do this. Not just in schools, where it's important to prevent searches for online games during lessons, but also in the corporate world, where, at the very least, it would be prudent to cut out searches for pornographic terms. It's not that difficult to come up with a handful of search terms that give potentially embarrassing image results.

So, how can we prevent users running wild with search engines? The first option is to secure all HTTPS traffic with "decrypt and inspect" type technology - your Smoothwall can do this, but you will need to distribute a certificate to everyone who wants to use your network to browse the web. This certificate tells the browser: "trust this organisation to look at my secure traffic and do the right thing". This gets us all the bells and whistles we were used to in the halcyon days of HTTP: SafeSearch, thumbnail blocking, and search term filtering and reporting.

Full decryption isn't as easy when the device in question is user-owned. The alternative here is to force SafeSearch (Google lets us do this without decrypting HTTPS), but it does leave you at Google's mercy in terms of what SafeSearch covers. It will block anything that's considered porn, but will leave a fair chunk of "adult" content, and doesn't attempt to cover subjects such as gambling - or indeed online games. You won't be able to report on any of this either, of course.
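For reference, Google's mechanism for forcing SafeSearch without decryption (current at the time of writing) is DNS-based: your resolver answers queries for Google's search hostnames with Google's SafeSearch virtual IP instead of the normal frontends. A sketch of the idea as a BIND-style zone override - check Google's own documentation for the exact list of hostnames to cover:

```zone
; Local resolver override (sketch): send Google search traffic to the
; SafeSearch VIP. The TLS certificate still matches, because the VIP
; is operated by Google itself.
www.google.com.   IN CNAME forcesafesearch.google.com.
```

Because the SafeSearch VIP presents a valid Google certificate, this is one of the few DNS tricks the browser will not flag as an attack.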

Some people ask "can we redirect to the HTTP site?" - this is a "downgrade attack", and exactly what modern browsers will spot and prevent us from doing. We also get asked "can we resolve DNS differently, and send secure traffic to a server we have the certificate for?" - well, yes, you can, but the browser will spot this too. You won't get a certificate for "google.com", and that's where the browser thinks it is going, so that's where it expects the certificate to be for.

In conclusion: ideally, you MITM or you force Google's SafeSearch & block access to other search engines. For more information read our whitepaper: 'The Risks of Secure Google Search'. It examines the problems associated with mandatory Google HTTPS searches, and suggests methods which can be used to remedy these issues.

Tuesday, February 24, 2015

Twitter - Den of Iniquity or Paragon of Virtue... or Someplace in Between?




Recently there's been some coverage of Twitter's propensity for porn. Some research has shown that one in every thousand tweets contains something pornographic. With 8,662 tweets purportedly sent every second, that's quite a lot.
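Multiplying those two quoted figures out gives a sense of scale:

```python
# Rough scale of "1 in 1000 tweets is pornographic" at ~8,662 tweets/second
# (both figures as quoted above).
tweets_per_sec = 8662
porn_rate = 1 / 1000

per_sec = tweets_per_sec * porn_rate
per_day = per_sec * 60 * 60 * 24

print(round(per_sec, 1))  # ~8.7 pornographic tweets every second
print(round(per_day))     # ~748397 per day
```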

Now, this is not something that has escaped our notice here at Smoothwall HQ. We like to help our customers keep the web clean and tidy for their users, and mostly that means free of porn. With Twitter that's particularly difficult. Their filtering isn't easy to enforce and, while we have had some reasonable results with a combination of search term filtering and stripping certain tweets based on content, it's still not optimal. Twitter does not enforce content marking, and 140 characters is right on the cusp of being impossible to content filter.

That said - how porn riddled is Twitter? Is there really sex round every corner? Is that little blue bird a pervert? Well, what we've found is: it's all relative.

Twitter is certainly among the more gutter variety of social networks, with Tumblr giving it a decent run for boobs-per-square-inch, but the likes of Facebook are much cleaner — with even images of breastfeeding mothers causing some controversy.

Interestingly, however, our back-of-a-beermat research leads us to believe that about 40 in every 1,000 websites are in some way linked to porn — these numbers come from running a quarter of a million of the most popular sites through Smoothwall's web filter and seeing what gets tagged as porn. Meanwhile, the Huffington Post reports that 30% of all Internet traffic is porn - the biggest number thus far. However, given porn's tendency toward video, I guess we shouldn't be shocked.

Twitter: a hard-to-filter, relatively porn-rich social network which is only doing its best to mirror the makeup of the Internet at large. As a school network admin, I would have it blocked for sure: Twitter themselves used to suggest a minimum age of 13, though this requirement quietly went away in a recent update to their terms of service.

Friday, January 30, 2015

Plausible Deniability - The Impact of Crypto Law

So, after the recent terror attacks in Paris, the UK suffered from the usual knee-jerk reactions from the technologically-challenged chaps we have governing us. “Let’s ban encryption the Government can’t crack”, they say. Many people mocked this, saying that terrorists were flouting laws anyway, so why would they obey the rules on crypto? How would companies that rely on crypto do business in the UK (that’s everyone, by the way)?


Well, I’m not going to dwell on those points, because I am rather late to the party in writing this piece, and because those points are boring :) In any case, if the Internet went all plaintext on us, web filtering would be a whole lot easier, and Smoothwall’s HTTPS features wouldn’t be quite so popular!


If the real intent of the law is to be able to arrest someone just for having, or sending, encrypted data - the equivalent of arresting someone for looking funny (or stepping on the cracks in pavements) - what would our miscreants do next?


Well, the idea we need to explore is “plausible deniability”. For example: you are a De Niro-esque mafia enforcer. You need to carry a baseball bat for the commission of your illicit work. If you want to be able to fool the local law enforcement, you might also carry a baseball. “I’m going to play baseball, officer” (it may not go down well at 3 in the morning when you have a corpse in the back seat of your car, but it’s a start). You conceal your weapon among things that help it look normal. It is possible to conceal the cryptography “weapon” in the same way, so that law enforcement can’t see it’s there and can’t arrest anyone. Is it possible to say “sorry officer, no AES256 here, just a picture of a kitteh”? If so, you have plausible deniability.

What’s the crypto equivalent? Steganography. The idea of hiding a message inside other data, such that it is very hard to prove a hidden message is there at all. Here’s an example:



This image of a slightly irritated-looking cat in a shoebox contains a short message. It will be very hard to find, because the original image is only on my hard disk, so you have nothing to compare it to. There are many steganographic methods for hiding the text, and it is extremely short by comparison to the image. If I had encrypted the text… well, you would find it even harder, because you couldn’t even look for words. It is left as an exercise for the reader to tell me in a comment what the message is.
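The post doesn't say which method the image uses, but the classic textbook approach is least-significant-bit (LSB) embedding: flip the lowest bit of each pixel byte to carry one bit of the message, which changes each byte by at most one - invisible to the eye, and undetectable without the original to compare against. A minimal, illustrative sketch over raw pixel bytes (not the method actually used for the cat photo):

```python
def embed(pixels, message: bytes) -> bytearray:
    """Hide `message` in the least significant bit of each pixel byte.
    A 16-bit length header is embedded first so extraction knows when to stop."""
    header = len(message).to_bytes(2, "big")
    bits = []
    for byte in header + message:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError("cover too small for this message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # each byte changes by at most 1
    return out

def extract(pixels) -> bytes:
    """Read the length header, then recover that many message bytes."""
    def read(start_bit: int, n_bytes: int) -> bytes:
        data = bytearray()
        for b in range(n_bytes):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_bit + b * 8 + i] & 1)
            data.append(byte)
        return bytes(data)
    length = int.from_bytes(read(0, 2), "big")
    return read(16, length)

cover = bytearray(range(256)) * 4   # stand-in for raw image pixel data
stego = embed(cover, b"kitteh says hi")
print(extract(stego))               # b'kitteh says hi'
```

Real tools operate on the decoded pixel data of a lossless format like PNG; recompressing with JPEG would destroy the low-order bits and the message with them.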