
Monthly Archives: April 2009

Chuck is too good a show to go out after only two seasons, just as it’s starting to really fly creatively. It’s currently on the bubble between renewal and cancellation. The Subway campaign for the finale went swimmingly, but we can’t stop there! Here are some ways you can help.

– Sign both of these petitions to tell NBC we want it renewed:
http://www.petitionspot.com/petitions/renewnbcchuck
http://www.youchoose.net/campaign/save_chuck_nbc_show

– Send a box of Nerds to NBC at:
NBC’s “Chuck”
Attn: Ben Silverman
3000 W Alameda Ave., Admin Bldg
Burbank, CA 91523

NBC’s “Chuck”
Attn: Angela Bromstad
100 Universal City Plaza, Bldg 1320E 4th Floor
Universal City, CA 91608

– Head over to Hulu and watch some episodes of Chuck even if you’ve already seen them all. Write a review for the show and make sure to rate the episodes (come on, five stars!)

– Stop by the NBC messageboards and post some messages showing your support for the show:
http://boards.nbc.com/nbc/index.php?showforum=90

– Here’s a form where you can send your own message to NBC:
http://effusiondesign.com/save-chuck.html

– Go to Digg and digg up all the Save Chuck articles you can find with the search feature. Search for things like ‘chuck renew’, ‘save chuck’, etc.

– Buy the DVDs and merchandise from NBC’s site (the store works too, but the decision will be made by Friday and the site gets the numbers to NBC more quickly)

– Send letters to NBC (Chuck himself gives you some addresses in this post)

Chris Wilson over at Slate posted an article calling for a move away from CAPTCHA tests and towards algorithms that observe interaction with a webpage to verify your humanity.

The problem, of course, is that observing the user’s interactions requires Javascript. Javascript requires a browser, and spammers aren’t using browsers to fill out forms on the web, are they? Sure, a sufficiently varied interaction-based verification system would hinder the average spammer’s ability to spam, but overcoming it would still be fairly trivial.

No matter how you arrange it, in the end the spammer can not only read the Javascript code to see what’s going on, but also mimic its responses to the server.

Let’s say this hypothetical spambot detector reported its result to the server in a hidden HTML field. Easy: the spammer fills the field with the expected value. OK, so let’s say the detector sends an AJAX call to the server once it’s sure this isn’t a spammer. But oh yeah, the spammer can send AJAX calls too.
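To make that concrete, here’s roughly all the “work” the spammer has to do. This is just a sketch: the URL, the field name, and the expected value are all made up, but any purely client-side signal ends up spoofable in exactly this way.

    # Sketch of a bot spoofing a client-side "humanity" signal. The URL,
    # field name, and expected value are hypothetical; the point is that
    # the bot never needs to run the page's Javascript at all.
    import urllib.parse
    import urllib.request

    payload = urllib.parse.urlencode({
        "comment": "Buy cheap pills!",
        "human_check": "i_am_a_human",  # value read straight out of the page's JS
    }).encode()

    # The same POST the "verified" browser would have sent.
    urllib.request.urlopen("http://example.com/post-comment", data=payload)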

CAPTCHAs are effective because the server withholds a piece of information that the user must figure out from the image. With any sort of behavior-based (and thus user-side, script-based) CAPTCHA, there really isn’t any information you can hold back. Worse yet, how would the spambot detector vary the situation for each unique page access? It couldn’t.

There’s a reason the brightest minds in the CS field have only been tweaking the current model: it’s the most effective (and possibly the only) way to stop spam, even if only temporarily. So I don’t think we’ll be able to get rid of “human checks” of some form or another any time soon.

That being said, the field has been shifting substantially toward knowledge-based CAPTCHAs like the ones mentioned earlier in the article. Case in point: the forums for the open-source game Cube, where you’re asked a question like “What color is the sky?” Without being prepared for that particular question, a computer wouldn’t know the answer was “blue” unless it could understand the concept of a sky and was familiar with the sky itself. With a sufficiently large pool of questions chosen uniformly at random, this is a good way to stop spam. The effectiveness of this model also scales with the number of questions your system can ask, since the spammer’s required preparation grows right along with the pool: if you have 10,000 questions (and thus 10,000 correct answers), he has to prepare his bot to answer a large fraction of them.

Furthermore, unlike with letter-based CAPTCHAs, you can assume a user is a bot on much less evidence. Honestly, assuming the user can read the language of the questions and that they really are common-knowledge questions, a user shouldn’t need to request a new question more than twice, right? So why not limit it to two. That would mean that, out of the 10,000 questions in your system, the spammer would have to prepare answers to something like 5,000 known questions to have a reasonable chance of getting through.
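Here’s a sketch of the kind of thing I mean, assuming a server-side session store; the question pool, function names, and the limit constant are purely illustrative, and a real pool would obviously be much larger and kept secret.

    import random

    # Hypothetical question pool; this lives only on the server,
    # never in anything shipped to the client.
    QUESTIONS = {
        "What color is the sky?": "blue",
        "How many legs does a dog have?": "4",
        "What is frozen water called?": "ice",
    }

    MAX_QUESTIONS = 2  # the "limit it to two" idea from above

    def new_question(session):
        """Pick a random question and remember the expected answer server-side."""
        session["asked"] = session.get("asked", 0) + 1
        if session["asked"] > MAX_QUESTIONS:
            return None  # asking for yet another question looks like a bot
        question = random.choice(list(QUESTIONS))
        session["expected"] = QUESTIONS[question]
        return question

    def check_answer(session, answer):
        """The client only ever sees the question text, never the answer."""
        return answer.strip().lower() == session.get("expected")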

Naturally, this model depends on the secrecy of the questions. Any open-source CAPTCHA system of this kind could be cracked easily, since the spammer could just grep through the source to pull the questions and corresponding answers out of the code. This really is the downfall of all set-based (or non-random) CAPTCHAs.

The article also mentions something I hadn’t heard about before: using a hidden field to trick spambots into filling it, since no human can fill in a box they can’t actually get to. The author dismisses this too quickly. It would be trivial to come up with a system where the server randomizes the names and order of the hidden field and the genuine message field. The server would not indicate inside the web page which one is the real field, but would instead keep that information in server-side session storage for each user. The main CSS file for the page would be dynamic and would provide a rule matching the field indicated in the session data, so that the hidden style gets applied only to the field that is supposed to be hidden for any given request.
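A minimal sketch of that setup, just to show the shape of it; the helper names, the decoy field, and the /comment action are all invented, and a real version would hook into whatever templating and session machinery the site already has.

    import random
    import string

    def _random_name():
        return "field_" + "".join(random.choices(string.ascii_lowercase, k=8))

    def build_form(session):
        """Emit the real message field plus a decoy, in random order, with
        random names. Only the server-side session knows which is which."""
        real, decoy = _random_name(), _random_name()
        session["real_field"], session["decoy_field"] = real, decoy
        fields = ['<textarea name="%s"></textarea>' % real,
                  '<textarea name="%s"></textarea>' % decoy]
        random.shuffle(fields)
        return '<form method="post" action="/comment">%s</form>' % "".join(fields)

    def build_css(session):
        """Dynamic stylesheet: the hide rule targets only this session's decoy."""
        return 'textarea[name="%s"] { display: none; }' % session["decoy_field"]

    def is_spam(session, form_data):
        """A human never sees the decoy, so anything typed into it flags a bot."""
        return bool(form_data.get(session["decoy_field"], "").strip())

Nothing in the markup itself says which field is the trap; that knowledge exists only in the session and in a stylesheet generated per request.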

This can be circumvented, but not without quite a bit of work. The spambot would have to parse the CSS and apply it to the input fields to determine which ones are hidden. Go ahead and add a few more layers of obfuscation, such as serving that rule from one of three or four different CSS files, making the system capable of injecting it into _any_ of the site’s stylesheets, and perhaps using multiple hidden fields, and you’ve got a pretty tough system.

I think the *real* key to reducing illicit CAPTCHA solving is to make the process as varied as possible across sites. Just as operating system monoculture promotes the spread of computer viruses, CAPTCHA monoculture promotes the spread of illicit solvers.

Finished the upgrade after three runs of dist-upgrade to catch all the stragglers. Rebooted into a VFS root panic; good thing the old kernels still worked. It appears update-grub forgets the initrd lines for the new kernels. Here’s a forum post that helped me solve the issue:

http://ubuntuforums.org/showthread.php?t=966939
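The short version, in case that link ever rots: each new kernel entry in /boot/grub/menu.lst needs its initrd line added back by hand, something along these lines (the kernel version and UUID are placeholders; use whatever your box actually has):

    title   Ubuntu 8.10, kernel 2.6.27-11-generic
    root    (hd0,0)
    kernel  /boot/vmlinuz-2.6.27-11-generic root=UUID=<your-root-uuid> ro quiet splash
    initrd  /boot/initrd.img-2.6.27-11-generic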

OK, so just about every cool new piece of Linux software is unavailable for Ubuntu’s Hardy Heron LTS (Long Term Support) release, so I am reluctantly pulling the trigger on this massive two-part upgrade. Now, this box has very limited hard drive space: about 7 GB for the main Linux install, an additional 20 GB drive or so for file storage, and an 11 GB Windows drive.

Ubuntu Hardy plus all the extra packages I’ve installed weighs in at just under 6 gigs, leaving too little headroom to go and download the Intrepid Ibex upgrade packages without some tricks.

I went ahead and moved my /var/cache/apt folder onto the storage drive and symlinked it into place. Apt was fine with this, of course, so I launched the Kubuntu upgrade mechanism (open Adept, fetch updates, hit “Version Upgrade”).
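For the record, the cache move was nothing fancier than something like this (the mount point is just an example; use wherever your storage drive actually lives):

    sudo mv /var/cache/apt /media/storage/apt-cache
    sudo ln -s /media/storage/apt-cache /var/cache/apt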

Now, I’ve never *ever* had one of these automated installers work properly, but it’s always been easy to continue the install process after it inevitably decides to abort because a single package download failed once. It usually leaves the modified /etc/apt/sources.list and all the downloaded packages in place, so I just pop open a terminal and type ‘sudo apt-get dist-upgrade’. The upgrade process then continues flawlessly until it’s done.

This time was a little more nerve-wracking. Apparently they “improved” the upgrade tool so that, as soon as you click “OK” on the package-download-failure dialog, it immediately starts reverting the system to the Hardy configuration (though it does keep the downloaded packages for you). This happens without ever asking whether you’d like to continue the process on your own.

Naturally, it would make sense for them to add a tiny little bit of fucking sanity by retrying failed downloads a few times, but instead they put more work into having it fail smoothly.

So it first reverted the Apt changes, THEN asked me if I was sure I wanted to abort the upgrade, you know, considering I never wanted to stop it in the first place. I did the smart thing and killed that fucker manually instead of clicking “No” to continue the upgrade, so it wouldn’t revert anything else (maybe it would go back on its promise and delete the gigabyte of packages I had just waited two hours to download!). Not my connection’s fault, by the way: Ubuntu’s archive servers start serving at full speed (500 KB/s), but after a while they wobble between 50 KB/s and about 1 KB/s.

So I had to go and manually fix my /etc/apt/sources.list, replacing every ‘hardy’ with ‘intrepid’. Finally, an apt-get update and dist-upgrade got the process rolling again.
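For reference, the manual recovery boils down to something like this (assuming you’re fine swapping hardy for intrepid across the whole file, which is what I did):

    sudo sed -i 's/hardy/intrepid/g' /etc/apt/sources.list
    sudo apt-get update && sudo apt-get dist-upgrade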

This will be the last time I use these piece-of-shit auto-upgrade tools. When I upgrade from Ibex to Jaunty, it will be manual.

“Ubuntu is so great because of the six month release cycle!”

I’ll tell you the story of me trying to upgrade Mono to version 2.4 and KDE to 4.2.2 on my Ubuntu 8.04 LTS box. Well, there actually isn’t a story, unless you want to hear about me searching for answers that don’t exist. I could compile Mono if I really wanted to, but I really don’t. KDE, though, is a royal pain in the ass to build, just barely easier than building the new modularized X server packaging. No way.

It’s rather stupid that there don’t seem to be ANY packages available for a LONG TERM SUPPORT release, not even from third parties or backports. These are major pieces of software with very fast release cycles; they deserve to be supported on Canonical’s current flagship Ubuntu.

In the case of Mono, only version 2.0 is supported even in Ibex, IIRC, because Mono has been churning out releases almost as if they were trying to catch up with something.

*sigh* Fuck it, I guess I have to upgrade. Grr.