Elance Fail

I signed up for Elance a few years ago when I was looking for work and thought of doing a little consulting. I never really saw any decent-paying work on their site and had pretty much forgotten about them until I received this e-mail today.

Dear xxxx,

We recently learned that certain Elance user information was accessed without authorization, including potentially yours. The data accessed was contact information — specifically name, email address, telephone number, city location and Elance login information (passwords were protected with encryption). This incident did NOT involve any credit card, bank account, social security or tax ID numbers.

We have remedied the cause of the breach and are working with appropriate authorities. We have also implemented additional security measures and have strengthened password requirements to protect all of our users.

We sincerely regret any inconvenience or disruption this may cause.

If you have any unanswered questions and for ongoing information about this matter, please visit this page in our Trust & Safety center: http://www.elance.com/p/trust/account_security.html

For information on re-setting your password, visit: http://help.elance.com/forums/30969/entries/47262

Thank you for your understanding,

Michael Culver
Vice President
Elance

I commend them for going public with this rather embarrassing story; a lot of companies hide these events. However, I’m annoyed that companies that retain this kind of data only seem to care about security after something like this happens.

My Subaru battery keeps the site alive

On Sept 29th, 2003, Hurricane Juan struck the Atlantic provinces of Canada. At the time I was living in PEI and telecommuting to Montreal for sysadmin work at an e-commerce company. By the time the storm struck Charlottetown it had been downgraded to a tropical storm, but it still packed a serious punch. I spent a couple of hours out exploring and taking pictures, having never witnessed such an event before. By the time I got back to my apartment at 2am the power was out, so I went to bed.

When I awoke in the morning the power was still out, not an ideal situation for a telecommuter. Of course, not 10 minutes after getting up I got a call complaining about problems with the website. No power = no high-speed internet, so I fired up the laptop and connected to a dialup account.

This worked for about an hour until my crappy Tecra 8200 battery started to give out. By then I had diagnosed the problem, but another hour or two of work was needed to get the site fully operational again, so I knew I needed to find some power. I remembered that I had an old 250 watt 12V DC-to-AC inverter. Instead of working in the car I opted to take the battery into the house.

So there I was with my laptop charging off my Subaru’s battery. I began to wonder if my high-speed provider had a generator at the head end, so I plugged my cable modem in and presto, high-speed internet. I started to get cocky and even plugged in my old 13-inch color TV and satellite receiver! It all worked perfectly, but after 10 minutes of TV I opted to just run the laptop and cable modem to get more run time.

To gauge my run time I put my multimeter on the battery to keep an eye on the voltage. Every 5-6 hours I would have to take the battery out to the car and go for a 30-40 minute drive to charge it up. It was totally inefficient and ridiculous but it worked.
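Out of curiosity I later did the back-of-the-envelope math on how long a charge should last. Every number below is an assumption (a typical car battery around 50 Ah, the laptop plus cable modem drawing roughly 60 watts, the little inverter maybe 85% efficient), not a measurement from my actual setup, but the estimate lands in the same ballpark as the 5-6 hours I was seeing:

    # Rough run-time estimate for a laptop + cable modem on a car battery
    # through a small inverter. Every number here is an assumption.
    BATTERY_AH = 50             # typical car battery capacity, amp-hours
    BATTERY_VOLTS = 12.0        # nominal battery voltage
    USABLE_FRACTION = 0.5       # only drain ~half so the car still starts
    INVERTER_EFFICIENCY = 0.85  # small inverters waste some power as heat
    LOAD_WATTS = 60             # laptop + cable modem, rough guess

    usable_wh = BATTERY_AH * BATTERY_VOLTS * USABLE_FRACTION  # ~300 Wh
    battery_draw = LOAD_WATTS / INVERTER_EFFICIENCY           # ~71 W
    print("Estimated run time: %.1f hours" % (usable_wh / battery_draw))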

IBM BladeCenter “Can not read power-on VPD for blade”

Spent two hours at the DC Friday night trying to install some new blades in our BladeCenter. Every time I installed a new blade, its power light just flashed rapidly. I logged into the web management interface for the BladeCenter and saw the event log entry “Can not read power-on VPD for blade”. Because the BladeCenter could not read the hardware VPD (vital product data) from the blade, it would not allow it to power on.

After speaking with IBM support we decided to update the firmware of the AMM (Advanced Management Module) in the BladeCenter. When I tried to update the firmware it gave me an error saying it could not install the update. After 45 minutes of trial and error with the IBM engineer (great support, BTW, 1000 times better than Dell!) we decided to reboot the AMM. (This had no effect on the running blades, by the way.)

After a reboot everything worked fine! I was able to install the 6 new blades and update the AMM firmware to the latest release. It was strange because all the other functions of the AMM seemed to be working fine; the already-installed blades were communicating with the AMM without issue. I would suggest that anyone who runs into this issue try rebooting the AMM first. It’s a fairly low-impact thing to do, and it would have saved me an hour if I had thought to do it from the start.
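For what it’s worth, the reboot doesn’t have to happen through the web UI. The AMM also has a command-line interface reachable over SSH or Telnet, so the restart can be scripted; here’s a minimal sketch using Python and the paramiko library. The hostname, credentials, and the exact reset command syntax are assumptions on my part, so check the AMM command-line reference for your firmware level before leaning on it:

    # Minimal sketch: restart a BladeCenter AMM over SSH instead of the web UI.
    # Host, credentials and the reset command syntax below are assumptions;
    # verify them against the AMM CLI reference for your firmware level.
    import paramiko

    AMM_HOST = "amm.example.net"   # hypothetical AMM address
    AMM_USER = "USERID"            # default-style account name, assumed
    AMM_PASS = "changeme"          # use your real credentials

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(AMM_HOST, username=AMM_USER, password=AMM_PASS)

    # Ask the primary management module to restart itself. The target
    # syntax is assumed; adjust it to whatever your AMM documents.
    stdin, stdout, stderr = client.exec_command("reset -T system:mm[1]")
    print(stdout.read().decode())
    client.close()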

Free Recursive DNS for your PC or network

Yesterday I stumbled on a website that provides free recursive DNS. It seems to work quite well and is nice and fast. They even offer auto-correction of common spelling mistakes and phishing prevention. The website states they make a profit by offering targeted search results to end users when they enter an unresolvable domain name. Neat idea. I wish domain squatters did this instead; it would let them catch not just the typos they think of but ALL typos.

Free caching DNS servers are a great idea; they give people whose ISPs don’t provide decent DNS servers a reliable alternative. Also, those who run their own caching nameservers can use them as forwarders to reduce the strain on the root servers.
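If you want to point a machine or a forwarder at a third-party resolver, it’s worth a quick sanity check first. Here’s a small sketch using the dnspython library; the resolver address below is just a documentation-range placeholder, not the service I mentioned, so substitute whatever addresses the provider publishes:

    # Quick sanity check of a recursive DNS server using dnspython
    # (pip install dnspython). The nameserver IP is a placeholder.
    import time
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)  # ignore resolv.conf
    resolver.nameservers = ["192.0.2.53"]              # placeholder address
    resolver.lifetime = 3.0                            # give up after 3s

    for name in ["example.com", "wikipedia.org"]:
        start = time.time()
        answer = resolver.resolve(name, "A")
        elapsed = (time.time() - start) * 1000
        print("%s: %s (%.0f ms)" % (
            name, ", ".join(r.address for r in answer), elapsed))

If the lookups come back quickly and correctly, the same addresses can go into the forwarders block of your own caching nameserver’s configuration.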

What is it about Friday afternoons anyway?

Today at 3:35pm a couple of critical servers went offline at the datacenter with no warning or explanation. Of course this had to happen at the end of the day on a Friday! After 5 seconds of bewilderment I called the NOC and had someone head out to the cage to take a look at what was going on. One of our power strips had just decided to shut off without warning. Most boxes just generated alerts about losing a power supply, but 2 machines were older and did not have dual power supplies, so they simply went offline. We moved these boxes over to another power strip and waited for the facilities guys to come and diagnose the problem.
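Those “lost a power supply” alerts were what tipped us off so quickly. On machines with a BMC, that kind of check can be as simple as a cron job wrapping ipmitool; here’s a rough sketch. Sensor names and the exact output format vary by vendor, so treat the parsing as an assumption to adapt rather than a drop-in monitor:

    # Rough sketch: ask the BMC for power-supply sensor status via ipmitool
    # and exit non-zero if anything is not reported as "ok". Output format
    # varies by vendor, so adapt the parsing before trusting it.
    import subprocess
    import sys

    def check_power_supplies():
        out = subprocess.run(
            ["ipmitool", "sdr", "type", "Power Supply"],
            capture_output=True, text=True, check=True,
        ).stdout
        problems = []
        for line in out.splitlines():
            fields = [f.strip() for f in line.split("|")]
            if len(fields) >= 3 and fields[2].lower() != "ok":
                problems.append(line.strip())
        return problems

    if __name__ == "__main__":
        bad = check_power_supplies()
        if bad:
            print("Power supply trouble:\n" + "\n".join(bad))
            sys.exit(1)   # non-zero exit so cron/monitoring can alert
        print("All power supplies report ok")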

After removing the power strip’s locking plug from under the raised tiles and testing the power source, they determined that the issue was local to the power strip. Upon plugging it back in, the power strip started working again. The power engineer figured that the new power deviation we had installed last week might have knocked the plug loose, so we left the power strip in place and decided to wait.

An hour later the circuit went down again. This time our NOC people decided to replace the troubled power strip, and now everything appears stable. Flaky power makes me nervous, especially in a mission-critical environment.