accotron devlog for 2022W47
2022-11-26 by Evrim Ă–ztamur

Calmer week with a bit more focus than the last one. It’s hard to find space these days! Had a strong start to the week with a good amount of progress on accotron.

Reading Asimov’s first Foundation book after coming across it at the bookstore (and having heard about it for the hundredth time).

Sessions and OAuth2 authentication for Xero

This week started out with some progress on supporting the OpenID workflow for Xero (or Sign in/Sign up with Xero, in their branded terms). The sign in flow fails over to sign up if there is no known user account, and from there it follows a regular session cookie process.

The sign in process requests the reduced scopes openid profile email and checks the Xero user UUID against the local database entries; if no user is found, a new user is created with those details and recorded. Sign up is actually just there to record the optional fields for full name and correspondence email.

Writing about this, I can already tell that I should probably add a ‘signed up on’ timestamp to the user entry, so that accounts without one fall over into the same sign up flow. Currently you could repeat the OAuth2 process ‘illegally’ using the full scope list and bypass providing these details. That’s one for the TODO.txt!
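
Roughly, the failover logic looks like this; a minimal sketch with an in-memory store and made-up names, not the actual database and session code:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class User:
        xero_user_id: str            # the Xero user UUID from the OpenID claims
        full_name: Optional[str] = None
        email: Optional[str] = None
        signed_up_on: Optional[datetime] = None  # the TODO item above

    users: dict[str, User] = {}  # stand-in for the local database

    def handle_openid_callback(xero_user_id: str, name: Optional[str], email: Optional[str]) -> str:
        user = users.get(xero_user_id)
        if user is None:
            # Unknown UUID: record the user and fall over into sign up, which only
            # collects the optional full name and correspondence email.
            users[xero_user_id] = User(xero_user_id, name, email)
            return "sign-up"
        if user.signed_up_on is None:
            # Without the timestamp, repeating OAuth2 with the full scope list
            # could bypass sign up entirely, so route these users back into it.
            return "sign-up"
        return "signed-in"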

Migration to Postgres

I started out by using SQLite because I didn’t have Postgres set up on the box I’m writing this from. However, I intended to eventually migrate to Postgres regardless.

There have been some funny translation issues, such as COALESCE not working the way it does in SQLite: Postgres requires every coalesced column to also be included in the GROUP BY. Not sure why SQLite lets that slide. Or string_agg in Postgres versus group_concat in SQLite.
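
In SQLAlchemy terms the difference looks roughly like this; the journal_lines table here is made up for the example, not the real schema:

    from sqlalchemy import Column, Integer, MetaData, String, Table, func, select

    metadata = MetaData()
    journal_lines = Table(
        "journal_lines", metadata,
        Column("journal_id", Integer),
        Column("description", String),
        Column("reference", String),
        Column("account_code", String),
    )

    stmt = (
        select(
            journal_lines.c.journal_id,
            func.coalesce(journal_lines.c.description, journal_lines.c.reference),
            # string_agg on Postgres; the SQLite spelling is func.group_concat(...)
            func.string_agg(journal_lines.c.account_code, ", "),
        )
        .group_by(
            journal_lines.c.journal_id,
            # Postgres wants every non-aggregated column here, even the ones that
            # only appear inside COALESCE; SQLite lets you group by journal_id alone.
            journal_lines.c.description,
            journal_lines.c.reference,
        )
    )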

SQLAlchemy, on the other hand, has made the process relatively painless. Postgres also led to me rewriting most of the data classes into ones that abide by its type restrictions.

SQLite doesn’t care about your types

A funny one was me declaring the amounts to be integers, recording them as floats (since that’s how I retrieve them from the Xero API), and SQLite happily ignoring it all. That led to some other adjustments to how numbers are retrieved and displayed; the arithmetic remains the same.
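
You can see the affinity behaviour for yourself with a standalone snippet (nothing accotron-specific here):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE lines (amount INTEGER)")

    # The column is declared INTEGER, but 19.99 can't be losslessly converted,
    # so SQLite stores it as a REAL and carries on without complaint.
    con.execute("INSERT INTO lines (amount) VALUES (?)", (19.99,))
    print(con.execute("SELECT amount, typeof(amount) FROM lines").fetchone())
    # (19.99, 'real') -- Postgres enforces the declared column type at insert time instead.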

Chain report

One thing I wanted to get ready was the chain report, which lists the journals related to a certain Xero resource (e.g. an invoice). I am working on a design that makes it clear how the journals tie into each other. There are certain challenges in data processing, such as annotating the ‘reason’ for a journal entry.

For example, a journal that’s the inverse of an earlier one (i.e. credits and debits flipped for every line) can either revert the balance of that earlier journal to allow for a modification, or simply represent the deletion of the resource itself. Determining whether an inverse journal marks a modification or a deletion requires you to walk the journal ‘chain.’

I am trying to find a good way to perform this process and ‘tag’ the journals appropriately. Visualisation of these journals alongside the ‘edit history’ entries for a given resource (available for most but not all resources) is another challenge that’s more of a UX one.
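
One direction I’m considering for the tagging part looks roughly like this; a sketch with simplified journals (just signed amounts per account), not the real data model:

    from dataclasses import dataclass

    @dataclass
    class Journal:
        journal_id: str
        lines: list[tuple[str, int]]  # (account code, signed amount in cents)

    def is_inverse(a: Journal, b: Journal) -> bool:
        # Every line flipped in sign means a is the inverse of b.
        return sorted(a.lines) == sorted((acct, -amt) for acct, amt in b.lines)

    def tag_chain(chain: list[Journal]) -> dict[str, str]:
        """Walk a resource's journals in order and guess why each one exists."""
        tags: dict[str, str] = {}
        for i, journal in enumerate(chain):
            prev = chain[i - 1] if i > 0 else None
            if prev is not None and is_inverse(journal, prev):
                # An inverse followed by further journals reads as a reversal that
                # allows a modification; an inverse at the end reads as a deletion.
                tags[journal.journal_id] = (
                    "reversal-for-modification" if i < len(chain) - 1 else "deletion"
                )
            else:
                tags[journal.journal_id] = "original" if i == 0 else "modification"
        return tags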

More for this week

I am writing this before the week’s end, and I still have some more things I want to squeeze into this weekend. Namely, there is an issue bubbling up around the distinction between ‘database rows’ and ‘data classes.’

I had issues (in previous projects) with a lot of application logic seeping into the storage-level modules (database read-write operations, queries to obtain data for reports, etc.), so this time I kept a separation between the two. I am not sure which is better: an all-encompassing ORM-style class that also contains application logic, or two distinct classes with conversion methods available on both.

I would not have much of an issue with the latter if Python had better type controls available and it were apparent at a glance which class a certain method is using, but type checking still feels like it’s in its infancy. As a result, I am leaning towards merging the two.
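
For reference, the two-class variant I keep going back and forth on looks something like this (illustrative names, not the actual models):

    from dataclasses import dataclass
    from typing import Optional
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class UserRow(Base):
        """Storage-level class: columns only, no application logic."""
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        xero_user_id = Column(String, nullable=False)
        full_name = Column(String)

    @dataclass
    class User:
        """Application-level class: logic lives here, storage details don't."""
        xero_user_id: str
        full_name: Optional[str]

        @classmethod
        def from_row(cls, row: UserRow) -> "User":
            return cls(xero_user_id=row.xero_user_id, full_name=row.full_name)

        def to_row(self) -> UserRow:
            return UserRow(xero_user_id=self.xero_user_id, full_name=self.full_name)

Merging the two would collapse this into a single ORM class carrying the application methods as well.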

Another item on the list is importing resource data into the database for non-journal resources. As I head toward preparing income statement and balance sheet reports, and annotating journals with more metadata, it’s going to be handy to have all these resources cached and ready. I think that a simple (ID, type, API response in JSON) triple will suffice.
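
A sketch of what that cache table could look like, assuming a JSONB payload column (table and column names are guesses, not final):

    from sqlalchemy import Column, DateTime, MetaData, String, Table, func
    from sqlalchemy.dialects.postgresql import JSONB

    metadata = MetaData()

    xero_resources = Table(
        "xero_resources", metadata,
        Column("resource_id", String, primary_key=True),    # Xero UUID
        Column("resource_type", String, primary_key=True),  # e.g. "invoice"
        Column("payload", JSONB, nullable=False),            # raw API response
        Column("fetched_at", DateTime(timezone=True), server_default=func.now()),
    )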

For the next week

Provided I finish the work for this week, I will be working on the new TODO item of repeating and validating the sign in/sign up flow to check for logical inconsistencies. Working further on the chain report to create a better overview of the chain’s history is probably going to be the highlight, with better metadata annotations for all linked resources too.