Tags

bunch of links

00:15, January 31, 2018

Couple of links to stuff I’ve been working with recently.

arrow

A really nice Python date library.

packagetracker

A fork of another Python library I tried to contribute to, but that the author has basically abandoned.

IEX Developer Platform

Rather nice web API for obtaining real-time stock quotes.

Envoy

Full-featured and highly performant distributed reverse proxy engine. Not sure how I missed this one before… nor why I’m still using Amazon’s ELBs.

bottom

A decent, lightweight Python asyncio IRC client library. AsyncIO all the things!

rebuilding things

23:05, January 30, 2018

Recently I decided it’s about time to retire the ancient server that has been running my website (and mail and DNS and a bunch of other things). It’s a Northwood Pentium 4 Xeon from 2002, the kind with Hyper-Threading, running 32-bit Linux 3.7.10.

Yes, that’s old.

Rather than upgrade, I’m probably moving all of this somewhere else, likely a cheap VPS somewhere… and since there’s a whole bunch of really old Perl running things here… I’ve been doing some rewriting.

Static webpage generation

The original website I put on this current server used a backend based on some unspeakable Perl, but the most recent version was using Pylons… and by “recent” I mean circa 2010. So yeah, that’s got to be redone. I pondered another web application, but decided that rather than doing on-the-fly content generation from templates, why not just pre-generate it?

So that’s what I’m doing. I’ve started building a bunch of Python scripts using Jinja templates and Markdown for blog content. Naturally, it’s all run by Makefiles.
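The shape of the pipeline is simple: render each post body, pour it into a page template, write a static file. Here’s a rough stdlib-only sketch of that idea, with string.Template standing in for Jinja and the post bodies assumed to be already-rendered Markdown (all names here are invented for illustration):

```python
from string import Template

# A trivial page template; the real thing would be a Jinja file on disk.
page = Template("<html><head><title>$title</title></head>"
                "<body>$body</body></html>")

# slug -> post body, as if already converted from Markdown to HTML.
posts = {"hello": "<p>First post.</p>"}

# Pre-generate every page up front instead of rendering per-request.
rendered = {slug: page.substitute(title=slug, body=body)
            for slug, body in posts.items()}
print(rendered["hello"])
```

The real version swaps in jinja2 and the markdown package and writes the results to files, with a Makefile deciding which posts actually need regenerating.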

Let’s Encrypt

Since I’m doing it all new, I’m finally going to go full SSL, via Let’s Encrypt. Yeah, about time.

So yeah. Stay tuned.

always handle errors

21:31, May 3, 2014

I made this pull request, but the author of the library thinks that not bothering to check HTTP status codes is acceptable.

So my code goes from:

request_token, request_token_secret = self.oauth.get_request_token(method="POST")
auth_token = self.oauth.get_access_token(request_token, request_token_secret, method="POST")
self.session = self.oauth.get_session(auth_token)

To:

from rauth.service import process_token_request
from rauth.utils import parse_utf8_qsl

rsp = self.oauth.get_raw_request_token(method="POST")
rsp.raise_for_status()
request_token, request_token_secret = process_token_request(
    rsp,
    parse_utf8_qsl,
    "oauth_token",
    "oauth_token_secret")

rsp = self.oauth.get_raw_access_token(request_token, request_token_secret, method="POST")
rsp.raise_for_status()
auth_token = process_token_request(rsp, parse_utf8_qsl, "oauth_token", "oauth_token_secret")
self.session = self.oauth.get_session(auth_token)

It’s not horrible, but really, why would you ever think it’s OK to not handle errors?
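To see why the check matters, here’s a toy sketch (not rauth itself; FakeResponse and parse_token are invented for illustration) of how an unchecked error response turns into a confusing failure that says nothing about HTTP:

```python
from urllib.parse import parse_qsl

class FakeResponse:
    """Hypothetical stand-in for an HTTP library's response object."""
    def __init__(self, status_code, text):
        self.status_code = status_code
        self.text = text

    def raise_for_status(self):
        if self.status_code >= 400:
            raise RuntimeError("HTTP %d" % self.status_code)

def parse_token(rsp):
    # Blindly parse the body, the way the library does.
    data = dict(parse_qsl(rsp.text))
    return data["oauth_token"], data["oauth_token_secret"]

ok = FakeResponse(200, "oauth_token=foo&oauth_token_secret=bar")
print(parse_token(ok))  # fine on the happy path

bad = FakeResponse(500, "<html>Internal Server Error</html>")
try:
    parse_token(bad)  # dies with a KeyError about "oauth_token"
except KeyError as exc:
    print("without a status check:", exc)

try:
    bad.raise_for_status()  # fails loudly, naming the real problem
except RuntimeError as exc:
    print("with a status check:", exc)
```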

bug or feature?

22:25, April 29, 2014

I’ve been writing an API for a little project I’ve been working on for a while, and in searching for a not-horrible way to do OAuth1 authentication, I actually found a Python library that doesn’t suck.

Of course, it’s not perfect. I noticed today that it doesn’t actually handle HTTP error responses - it doesn’t even check the return code at all, just assumes that any response it’s given will be parseable. Which of course is not at all true in many cases - including in mine.

So of course I’ve forked it and am working on a fix.

you guessed it - another bug

01:37, April 25, 2014

Found another bug and made a pull request - this time in the ‘rauth’ library, which does OAuth in a reasonably sane way.

Except for this issue - I still have no idea why they’re trying to parse the OAuth response with a utility used for parsing HTTP requests, but hey, I guess if it works for them, fine. For me though, I need to replace their use of parse_utf8_qsl(s) with json.loads(s.decode()) because my response is proper JSON - shouldn’t OAuth responses be JSON anyway?

Whatever, it’s late.

EDIT: Okay, so it turns out I was doing silly things like not reading the OAuth spec; the response should be a query-string type thing like oauth_token=foo&oauth_token_secret=bar instead, which is what the library parses just fine by default. Reading specs is a good habit, one I encourage everyone to pick up.
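A quick stdlib illustration of both formats, with urllib.parse.parse_qsl playing the role rauth’s parse_utf8_qsl plays on the spec-compliant body:

```python
import json
from urllib.parse import parse_qsl

# The spec-compliant OAuth 1.0 token response: a query string.
body = "oauth_token=foo&oauth_token_secret=bar"
tokens = dict(parse_qsl(body))
print(tokens)

# A JSON body, by contrast, needs json.loads instead, which is why a
# pluggable parser argument is handy for servers that break the spec.
json_body = '{"oauth_token": "foo", "oauth_token_secret": "bar"}'
json_tokens = json.loads(json_body)
print(json_tokens)
```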

My pull request is still valid though: if you really must break the spec, the library already has a parser argument, and it should work in a more sensible way.

yet another bugfix

16:24, April 10, 2014

Another bugfix for s3cmd.

bugfixin

11:37, March 25, 2014

Bugfix for s3cmd - some issues with command-line arguments not working when I needed them to.

RSS alert feed bot

18:28, March 11, 2014

Today I created a program to pull data from the RSS feeds our service vendors use for alerts, and either log them, email them, or instant-message them (we use HipChat) to various support groups.

AND, I open sourced it on GitHub. Enjoy!
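The core of such a bot is small. Here’s a stdlib-only sketch of the idea; parse_alerts, dispatch, and the sample feed are invented for illustration and are not the actual open-sourced code:

```python
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Vendor Alerts</title>
  <item><title>API degraded</title><link>http://example.com/1</link></item>
  <item><title>All clear</title><link>http://example.com/2</link></item>
</channel></rss>"""

def parse_alerts(xml_text):
    # Pull (title, link) pairs out of an RSS 2.0 document.
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def dispatch(alert, sinks):
    # Each sink is any callable: a logger, an emailer, a chat notifier.
    for sink in sinks:
        sink(alert)

alerts = parse_alerts(SAMPLE_FEED)
seen = []
dispatch(alerts[0], [seen.append, print])
```

The real bot fetches live feeds on a timer and remembers which items it has already dispatched, but the parse-then-fan-out shape is the same.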

rom

22:16, February 27, 2014

More Python 3.3 porting, this time an interesting Redis ORM.

EDIT: Changed my mind, someone else has started a 3.3 port which looks like a way better method: mayfield/rom.

Google you little...

22:16, October 28, 2013

I see the Google+ article format returned by their Python API has changed again. You will note the sidebar over on the right there only shows images and no articles now. I’m getting really tired of fixing this every month.

Probably I’ll just not bother soon, and remove that whole sidebar altogether.

whoa python iterators buffer

14:01, June 21, 2013

So I’ve been using this code in a few programs at work:

p = subprocess.Popen(...)
for line in p.stdout:
    ...
    print(line)

It turns out there’s a bunch of output buffering going on here. You could put a sys.stdout.flush() after that print, but it won’t help.

The iterator buffers. Do this:

p = subprocess.Popen(...)
while True:
    line = p.stdout.readline()
    if not line:
        break
    ...
    print(line)

Et voilà! No buffering.
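The same effect can be had more compactly with the iter()-with-sentinel idiom, which also reads a line at a time via readline(). A small self-contained example (the child command here is just an illustrative stand-in):

```python
import subprocess
import sys

# Spawn a trivial child process that prints one line.
p = subprocess.Popen([sys.executable, "-c", "print('hello')"],
                     stdout=subprocess.PIPE)

# iter() calls readline() until it returns the sentinel b"" at EOF.
lines = []
for raw in iter(p.stdout.readline, b""):
    lines.append(raw.decode())
    print(lines[-1], end="")
p.wait()
```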

[RSS]