exploited

10:44am on Jul 09, 2014

Fun, I just got hit by this flaw in ElasticSearch.

always handle errors

8:31pm on May 03, 2014

I made this pull request, but the author of the library thinks that not bothering to check HTTP status codes is acceptable.

So my code goes from:

request_token, request_token_secret = self.oauth.get_request_token(method="POST")
auth_token = self.oauth.get_access_token(request_token, request_token_secret, method="POST")
self.session = self.oauth.get_session(auth_token)

To:

from rauth.service import process_token_request
from rauth.utils import parse_utf8_qsl

rsp = self.oauth.get_raw_request_token(method="POST")
rsp.raise_for_status()
request_token, request_token_secret = process_token_request(rsp, parse_utf8_qsl, "oauth_token", "oauth_token_secret")

rsp = self.oauth.get_raw_access_token(request_token, request_token_secret, method="POST")
rsp.raise_for_status()
auth_token = process_token_request(rsp, parse_utf8_qsl, "oauth_token", "oauth_token_secret")

self.session = self.oauth.get_session(auth_token)

It's not horrible, but really, why would you ever think it's OK to not handle errors?

bug or feature?

9:25pm on Apr 29, 2014

I've been writing an API for a little project I've been working on for a while, and in searching for a not-horrible way to do OAuth1 authentication, I actually found a Python library that doesn't suck.

Of course, it's not perfect. I noticed today that it doesn't actually handle HTTP error responses - it doesn't check the status code at all, it just assumes that any response it's given will be parseable. Which, of course, is not at all true in many cases - including mine.
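
To illustrate (a minimal sketch using the stdlib parser and a made-up error body, not rauth's actual code path): a failed request parses "successfully" into nothing, so the real failure only surfaces later as a confusing KeyError instead of an HTTP error.

from urllib.parse import parse_qsl

# Hypothetical body of a failed request - an HTML error page,
# not the form-encoded credentials the parser expects.
error_body = "<html><body>500 Internal Server Error</body></html>"

creds = dict(parse_qsl(error_body))
print(creds)  # {} - no exception, no tokens, no hint the HTTP call failed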

So of course I've forked it and am working on a fix.

you guessed it - another bug

12:37am on Apr 25, 2014

Found another bug and made a pull request - this time in the 'rauth' library, which does OAuth in a reasonably sane way.

Except for this issue - I still have no idea why they're trying to parse the OAuth response with a utility used for parsing HTTP requests, but hey, I guess if it works for them, fine.

For me though, I need to replace their use of parse_utf8_qsl(s) with json.loads(s.decode()) because my response is proper JSON - shouldn't OAuth responses be JSON anyway?

Whatever, it's late.

EDIT: Okay, so it turns out I was doing silly things like not reading the OAuth spec, and the response should be a query-string type thing like oauth_token=foo&oauth_token_secret=bar instead, which is what the library parses just fine by default. Reading specs is a good plan, one I encourage everyone to follow.
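
For what it's worth, that spec-compliant body parses fine with nothing but the standard library (a quick sketch with made-up token values):

from urllib.parse import parse_qsl

body = "oauth_token=foo&oauth_token_secret=bar"
creds = dict(parse_qsl(body))
print(creds["oauth_token"], creds["oauth_token_secret"])  # foo bar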

My pull request is still valid though - if you really must break the spec, the library already has the parser argument, and it should work in a more sensible way.

yet another bugfix

3:24pm on Apr 10, 2014

Another bugfix for s3cmd.

bugfixin

10:37am on Mar 25, 2014

Bugfix for s3cmd - some issues with command-line arguments not working when I needed them to.

RSS alert feed bot

5:28pm on Mar 11, 2014

Today I created a program to pull data from the RSS feeds our service vendors use for alerts and send it to various support groups via log, email, or instant message (we use Hipchat).

AND, I open-sourced it on GitHub. Enjoy!
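
The core loop is roughly this shape (a minimal sketch with assumed names - the feed URL, the seen set, and notify() are placeholders, not the real program's code):

import feedparser

FEEDS = ["https://status.example.com/alerts.rss"]  # hypothetical vendor feed
seen = set()

def notify(entry):
    # stand-in for the real dispatch to log, email, or Hipchat
    print(entry.title, entry.link)

for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        key = entry.get("id", entry.link)
        if key not in seen:
            seen.add(key)
            notify(entry)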

rom

10:16pm on Feb 27, 2014

More Python 3.3 porting, this time an interesting Redis ORM.

EDIT: Changed my mind, someone else has started a 3.3 port which looks like a way better method: mayfield/rom.

slow as molasses

1:43pm on Feb 27, 2014

Nearly two years to the day after I submitted this issue with the Android SDK's 'draw9patch' tool to Google, the issue is still open.

I only mention this because it seems the owner of the ticket has changed today.

Way to go, Google.

MySQL to PostgreSQL data

4:19pm on Feb 11, 2014

I'm trying to pitch changing to PostgreSQL at work, so I had to figure this out today.

To export:

# Dump each table as tab-separated text with a header row.
# Assumes the default database is already selected (e.g., via ~/.my.cnf).
for i in table1 table2 ; do
  mysql --batch -e "SELECT * FROM $i" > $i.csv
done

To import:

# Load each dump into the matching table. tail skips the header row,
# sed escapes carriage returns, drops escaped NULs, and maps MySQL's
# "zero date" to NULL, and iconv -c drops any invalid UTF-8 bytes.
# Assumes psql's default connection points at the target database.
for f in *.csv; do
  TABLE=${f%.*}
  tail -n +2 $f | \
  sed -e 's/\r/\\r/g' \
      -e 's/\\0//g' \
      -e 's/0000-00-00 00:00:00/NULL/g' | \
  iconv -t "utf-8" -f "utf-8" -c | \
  psql -c "COPY \"$TABLE\" FROM stdin WITH NULL 'NULL'"
done

Note the sed command to remove backslash-zero - since this is an escaped dump, COPY would turn that into a literal null character, which isn't allowed in a string. Also, one row I saw had the "zero date" shown there - I'm pretty sure that date never happened, so I'm calling it 'NULL'.
