Automation is my favorite form of procrastination, mostly because I can turn it into something marketable at the last moment and pretend that I was "being productive" the entire time. I usually find myself trying to automate some sub-process of some thing I rarely do when I have a lot of content writing I need to do (and very little patience for it).
But this is slightly different.
There's no way for me to market this.
What I'm On About: Using Gulp the Dumb Way
I'm using Gulp to make a lightweight local dashboard for semi-universal project tracking. While I like suites like Rainmeter and dashboards in general, the numbers that I pay attention to are either really simple or really fiddly. I never fully utilize pre-made solutions and I always find gaps, and I end up under-tracking instead of taking the time to `grep` through all of my files and figure things out. I'm fussy about it.
I have no idea what the end product will look like. While I know enough HTML/CSS/JS to survive my day job, there's a huge difference between knowing enough front-end stuff to write project documentation and knowing enough front-end stuff to pull off a weird hacky solo project that deliberately uses things the wrong way.
I'll probably change technologies at least once. I haven't even finished this first update and I'm already thinking about diving deep into some `npm` scripts in order to shift things over into "stupid things with bash scripts" territory for the intermediate bits. Hell, I might even invite AHK to the party and add some foot-switch macros to it. I've done very little planning at this point.
This isn't going to be a how-to guide. While I'm really good at editing how-to guides (translating technical processes into user-friendly steps is, like, my thing), this is more of an exploratory process/article/series of rants. I'm taking a set of front-end skills I don't use every day and an automation tool that I barely use at all, and I'm bashing them together until something giggle-worthy falls out. It won't be pretty.
Seriously, Though, What?
I'm using Gulp to "export" data from a bunch of different projects on save, and then displaying that data on a locally hosted static (in the traditional sense) webpage that uses Gulp to put all of that random-ass asynchronous data where it needs to go. The goal is to create a sort of "universal project dashboard" that isn't resource intensive or reliant on technologies that I can't break, fix, and generally fuck with. I'm trying to reduce my reliance on mobile and cloud apps by building data-collection methods into my environment.
Here's a fun infographic:
I should note that this system doesn't provide a historical view of the data I'm (literally) `pipe()`-ing around. All it does is update a handful of variables on run. I could theoretically do some stuff with SQLite and open the door to some fun D3.js charts, but that isn't a first-release priority for me at the moment; I want to systematize cross-project scalability before diving into database stuff.
Although, now that I think about it, I could probably simplify what I'd be doing with Gulp if I skipped the hacky stuff and went straight to the "globally accessible local database" stage...
I'll get back to you on that.
The Road Ahead
I need a crystal-clear view of how I'll be parsing my project files; that'll influence the obstacles I'll face downstream and how I'll handle them. There's a wealth of documentation available for that (including some VSCode documentation on Language Servers that I've been meaning to dive into for a while now), and I'm feeling fairly confident in my ability to implement something that's conceptually efficient. Call it the refresher stage.
I'll have to weasel out the best way to integrate VSCode tasks into the system, but I should be able to figure that out as I test different parsing methods. My ideal implementation wouldn't be editor-dependent, but I'll give ground quickly if the end product is more scalable or flexible.
I'm deliberately procrastinating on the dashboard building stuff, as I'd rather not commit to a set of visuals without having a clear idea of what/how I'm delivering content.
Update 1: I'm Stupid
Alright, that didn't take long at all. In less than a week of mental-chewing, I've figured out a way to be a hell of a lot more efficient with all of this.
Let's start by looking at the things I want my end product to do:
- Register and de-register watch files and/or watch folders
- Identify the language of the registered files based on their file types
- Parse registered files using regular expressions
- Calculate the total word count in each registered file
- Transmit the word count information to another file
This shit's a breeze in Python.
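A minimal sketch of those five steps, assuming a JSON registry file and dead-simple regex word counting (every filename and function name here is mine, not a final design):

```python
import json
import re
from pathlib import Path

REGISTRY = Path("registry.json")    # hypothetical watch-file registry (step 1)
OUTPUT = Path("wordcounts.json")    # hypothetical destination file (step 5)

# Step 2: identify language by file suffix
LANGUAGES = {".md": "markdown", ".txt": "plaintext", ".py": "python"}

# Step 3: regex parse — one "word" per \w+ run
WORD_RE = re.compile(r"\b\w+\b")

def register(path):
    """Step 1a: add a file to the watch registry."""
    entries = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    if path not in entries:
        entries.append(path)
    REGISTRY.write_text(json.dumps(entries))

def deregister(path):
    """Step 1b: drop a file from the watch registry."""
    entries = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    REGISTRY.write_text(json.dumps([p for p in entries if p != path]))

def word_count(path):
    """Steps 2-4: identify the language, parse, and count words."""
    language = LANGUAGES.get(Path(path).suffix, "unknown")
    words = len(WORD_RE.findall(Path(path).read_text(encoding="utf-8")))
    return {"file": path, "language": language, "words": words}

def transmit():
    """Step 5: write every registered file's count to another file."""
    entries = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    OUTPUT.write_text(json.dumps([word_count(p) for p in entries], indent=2))
```

That's the whole pipeline in stdlib Python; the only part this sketch punts on is the actual file watching, which is exactly the part Keypirinha and a shell script can trigger from outside.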
How Python Actually Makes Things Easier For Once
Being a gosh darned hipster, I still don't use the Windows 10 launcher. I use Keypirinha. Why?
Because I can run scripts from it, using the plugin system to enumerate possible arguments (so I don't need to memorize everything).
Since Keypirinha plugins are written in Python, it makes for an ideal endpoint; I can handle the entirety of step 1 with a Keypirinha plugin without getting hacky. The first half of my step 2 implementation should also fit in here, as I'll already be doing some basic string parsing in order to integrate the features into the Keypirinha usage cycle.
For steps 3 and 4, text parsing with Python is a cut and dried process. I won't be entering new territory, and I'll be able to leverage a huge collection of snippets and tutorials, which should allow me to deploy a prototype relatively quickly.
Step 5 is where things get fun, but I'll have to drill down into step 1 in order to model this properly.
Keypirinha, Python, and File-and-Folder Project Management: A Perfect Team
Yes, I really am going with a pirate joke. Deal with it.
With Keypirinha handling the CLI-ish end of things, the breakdown of what I have to make is simple:
With the Keypirinha stock plugins being open source, I can yoink and modify a large portion of their generic functions and focus on implementing the three big commands, `Check`. Of the three, `Instantiate` will be the most complicated one, as it'll handle the somewhat fussy process of creating our project files.
The project file model is a deliberate one. While I could integrate the majority of my functions into the Keypirinha plugin, I like the idea of a system with multiple deployment options and I'm keeping that door open for now. The Powershell+Python setup isn't elegant but it is forgiving, and it'll allow me to do a fair bit of lazy prototyping. The files-in-folders approach also exposes the operational structure of the dashboard, and it allows for a lot of per-project customization without making the initial workload any larger; everything down-pipe from Keypirinha is uncompiled, locally scoped, and easily modified.
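To make the files-in-folders idea concrete, here's a guess at what a generated per-project file might look like (every key in this fragment is hypothetical — this is the kind of thing `Instantiate` would drop into a project folder, not a committed schema):

```json
{
  "project": "autoMatey",
  "watch": ["drafts/*.md", "notes/*.txt"],
  "parser": "regex-wordcount",
  "output": "Progress.db"
}
```

Because it's just a file sitting in the project folder, any deployment option — Keypirinha plugin, PowerShell script, bare Python — can read and rewrite it without recompiling anything.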
External Inspiration: Doing things with 'Doing'
This line will eventually link to a separate article about Doing.
I've encountered a fundamental flaw in my "package managers and CLI installers are fucking awesome" productivity model: I'm perpetually digging into the documentation of programming languages I don't know. I'll have to silo myself soon in order to break the plateau on anything/everything.
The source of these musings is a tool called Doing that's fucking awesome. In the developer's words, it is:
A command line tool for remembering what you were doing and tracking what you've done.
It's built with Ruby, the code is chock full of informative comments, it uses a Files-in-Folders system for per-project customization, and it's amazingly easy to use. The syntax is both easy to memorize and easy to intuit, and it's already worked its way into the guts of my work loop.
What does this have to do with autoMatey?
It's autoMatey's mirror image.
"Black Box" Tool Design
Productivity tools borrow a lot of control system logic. User-system interactions are modeled as open-loop systems, and the vast majority of the intermediate planning and feedback tools (Flowcharts! So! Many! Flowcharts!) naturally bias the UI/UX principles towards tight and immediate control loops. It's a mode of logic that's often effective and appropriate, but it isn't universally effective and appropriate.
Sometimes, tight and immediate feedback in a system that presupposes a need for active monitoring breaks the user's flow.
Sometimes, system interaction needs to be feedback-less, as the trigger and response system is wholly external to the tool.
Sometimes, users need a "black box" tool that allows for a somewhat-arbitrary separation of the input and feedback loops in order to maintain their productivity silos.
Sometimes, I just want a tool that stays out of my fucking way.
Doing manages to stay out of my way quite nicely. It's just note-taking. Once it's finished, autoMatey will (hopefully) achieve a similar level of non-intrusive functionality on the output side of the black box model: I do (almost) nothing, it works anyways, I only see it when I go looking for it.
Planning-Stage Steps For Black Box Function
As I mentioned in the first chunk of this article, I have a fair bit of planning I need to chew through before I can really pull together the working pieces of autoMatey. My initial concern was data structuring, as I felt that the what and how of the database would influence my downstream choices. I didn't explain that very well.
Here's an off-the-cuff table of the data that'll be sent in the initial SQL query from `Progress.db` when it's triggered by the PowerShell script:

| Project | Document | Word Count | Date |
| --- | --- | --- | --- |
| autoMatey | Planning Article — Update 1.md | 1246 | Jan 30, 2018 10:34AM |
After writing that, we'll run a `SELECT * FROM referenceTable WHERE...` query to grab the previous (washed, rinsed, verified) entry for that document in that project and pull the word count. Then we'll do our math, calculate the word count difference, run through some logic steps for a few flags, and write the (now washed, rinsed, and verified) update to `Progress.db`, ending up with something like this:
| Project | Document | Word Count | Word Count Difference | Flag1 | Flag2 | Flag3 | Date |
| --- | --- | --- | --- | --- | --- | --- | --- |
| autoMatey | Planning Article — Update 1.md | 1246 | 964 | Flag1 Value | Flag2 Value | Flag3 Value | Jan 30, 2018, 10:34AM |
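That select-diff-flag-write loop is a few lines of stdlib `sqlite3`. This is a hedged sketch — the table layout, column names, and the single stand-in flag rule are all mine, not the final schema:

```python
import sqlite3

def record_update(db_path, project, document, words, stamp):
    """Look up the previous entry, compute the difference, flag it, write it."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS referenceTable (
        project TEXT, document TEXT, words INTEGER,
        diff INTEGER, flag1 TEXT, stamp TEXT)""")
    # Grab the previous (washed, rinsed, verified) entry for this document
    row = con.execute(
        "SELECT words FROM referenceTable WHERE project=? AND document=? "
        "ORDER BY rowid DESC LIMIT 1", (project, document)).fetchone()
    previous = row[0] if row else 0
    diff = words - previous
    # Stand-in flag logic; the real per-suffix rules would live in Do.py
    flag1 = "significant" if diff >= 500 else "minor"
    con.execute("INSERT INTO referenceTable VALUES (?,?,?,?,?,?)",
                (project, document, words, diff, flag1, stamp))
    con.commit()
    con.close()
    return diff, flag1
```

Feed it the example above (a jump from 282 to 1246 words) and it records a difference of 964 and flags the update.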
I'm doing my flagging and analysis in `Do.py` and writing the results to `Progress.db` in order to separate the flag logic from the data visualization system. A significant update for a plaintext project is defined differently than a significant update for a programming project, but the styling and display rules for significant updates as a whole are universal; by setting the flag logic on a per-directory, per-file-suffix basis I can maintain my silos and separate my development concerns.
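The per-suffix split can be as dumb as a dict of rules keyed by file extension — thresholds and names below are invented for illustration, not measured:

```python
# Hypothetical per-suffix flag rules: a "significant update" means
# something different for prose (.md) than for code (.py).
FLAG_RULES = {
    ".md": lambda diff: "significant" if abs(diff) >= 500 else "minor",
    ".py": lambda diff: "significant" if abs(diff) >= 50 else "minor",
}

def flag_for(suffix, diff):
    """Pick the flag for a word-count difference, defaulting to prose rules."""
    rule = FLAG_RULES.get(suffix, FLAG_RULES[".md"])
    return rule(diff)
```

The dashboard never sees these rules; it only ever reads the resulting flag values, so the display logic stays universal.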
This approach requires a locked-in data structure, and it might not survive the iterative development cycle. By recording the flag values in the database, I need to be 100% confident in my flag categories and flag count. If I decide that I need to change the flags down the road, I'll have to go down the funky SQL rabbit hole (to update the existing records to match the flag change), break my model (processing the changes in `Dashboard.js` and turning this into a spaghetti code project), or eat the data loss (losing any hope of database interoperability).
I'm worried about those possibilities; more worried than I was when I initially visualized this project.