So where does that put it? Well, I backed up the official firmware and slapped ESPHome on it, and it's up and running; the readings look correct to me. Next up is figuring out how much power it eats, maybe adding something to average the two sensors on the device, and honestly starting to ponder what it'll take to run it up a pole in the yard.
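If I do the averaging, something like an ESPHome template sensor is probably the shape of it. A sketch only: pms_a_pm25 and pms_b_pm25 are hypothetical IDs standing in for the two PMS readings.

    # Sketch: average the two PMS PM2.5 readings into one value.
    # pms_a_pm25 / pms_b_pm25 are hypothetical sensor IDs.
    sensor:
      - platform: template
        name: "PM2.5 (averaged)"
        unit_of_measurement: "µg/m³"
        update_interval: 60s
        lambda: |-
          return (id(pms_a_pm25).state + id(pms_b_pm25).state) / 2.0;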
I'll check in with my buddy who sent it to me and see if there was something up with the PMS sensors. Honestly, I've got extras, so maybe swapping them out is a plan?
So I spent some time with the multimeter mapping the board. It is, unsurprisingly, absolutely bog standard; nothing crazy here. It does have two extra connectors with nothing plugged in. The smaller one is on the I2C bus; the bigger one is maybe for another PMS, though it seems to only have one pin, maybe two, run to it.
Pinout (IO numbers)
14 -
12 - PMS RST
13 - PMS TX
15 -
02 - PMS TX
00 - PMS RST
16 -
05 - BME SCL
04 - BME SDA
TXD and RXD come out through the USB port nicely. The BME280 is at I2C address 0x76.
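Translating that into ESPHome config looks roughly like this. A sketch only, assuming an ESP8266 and PMSX003-style sensors; which reset line pairs with which sensor is a guess, and all the names and IDs are mine, not from the stock firmware.

    # Sketch: pin assignments from the multimeter mapping above.
    # Assumes ESP8266 + PMSX003-compatible sensors; RST pairing is a guess.
    i2c:
      sda: GPIO4
      scl: GPIO5

    uart:
      - id: uart_pms_a
        rx_pin: GPIO13   # first PMS TX
        baud_rate: 9600
      - id: uart_pms_b
        rx_pin: GPIO2    # second PMS TX
        baud_rate: 9600

    sensor:
      - platform: bme280_i2c
        address: 0x76
        temperature:
          name: "BME280 Temperature"
        pressure:
          name: "BME280 Pressure"
        humidity:
          name: "BME280 Humidity"
      - platform: pmsx003
        type: PMSX003
        uart_id: uart_pms_a
        pm_2_5:
          name: "PMS A PM2.5"
          id: pms_a_pm25
      - platform: pmsx003
        type: PMSX003
        uart_id: uart_pms_b
        pm_2_5:
          name: "PMS B PM2.5"
          id: pms_b_pm25

    switch:
      - platform: gpio
        pin: GPIO12
        name: "PMS A Reset"
      - platform: gpio
        pin: GPIO0
        name: "PMS B Reset"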
hmmmm I might actually propose that as a patch to Fedora if it's not that way already... (though that bit me on Alma today)
It has been, again, 0 days since something that should be sane and logical bit me, causing excessive debugging and mutterings of "that should work..." thanks to systemd having some borked assumptions *sigh*
Today's? .link files are processed in sorted order, first match wins. Great, that's fine! Except 99-default.link sorts before anything starting with [a-zA-Z], so a file named with letters alone never gets a chance... that should really be ZZZZ-default.link I think, or the default made 'special' *sigh*
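For reference, the workaround is just to name your own .link file so it sorts before 99-default.link. A sketch; the MAC address and interface name here are made up.

    # /etc/systemd/network/50-lan0.link -- hypothetical example.
    # A numeric prefix below 99 sorts ahead of 99-default.link,
    # so this file actually gets matched first.
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=lan0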
Ohhhhhhhh my, this thing is... fast... like, holy cow stinking fast. It's somewhat surreal. I've got the ADATA SSDs in RAID1 as the boot/root drives and it just feels way faster than it should. I'm not sure if this is Alma 9 and the way the whole system is compiled, or if those SSDs really are THAT fast (there's no way they should be).
Definitely need to start adding SSDs to the Ceph pool down in the data center!
Soooooooo I *MIGHT* have bought a 1U half-depth machine with 32GB of its own RAM and 2 x 1TB M.2 NVMe drives. The plan: recycle the dual-NVMe switch card that didn't work out, use some extension cables to make space for an Optane drive, add 2 x 3TB of spinning rust, and I think there's room to sneak in two more M.2 SATA SSDs (I'll just pull from what I have lying around).
I *THINK* that will sort the I/O problem, and likely then some. Either way, it's a good experiment to test with.
2/2
So, to do metrics/stats collection from this constellation of mirror servers, I spun up a VM on my own setup to process and collect it all. It's not bad, but the amount of data we're generating points to two things:
1) Data needs to be aggregated over time windows, say 30-second averages instead of the 1-second resolution I have now (sketch after this post).
2) A running theme in this whole project: I/O bandwidth is the dominant problem.
1/2
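For point 1, the shape of the aggregation is simple enough. A minimal sketch, assuming samples arrive as (unix_timestamp, value) pairs; none of this is the actual collector code.

    # Minimal downsampling sketch: collapse 1-second samples into
    # 30-second window averages. Assumes (unix_timestamp, value) pairs.
    from collections import defaultdict

    def downsample(samples, window=30):
        """Average (timestamp, value) samples into `window`-second buckets."""
        buckets = defaultdict(list)
        for ts, value in samples:
            buckets[int(ts) // window * window].append(value)
        # One (bucket_start, mean) pair per window, in time order.
        return sorted((start, sum(vals) / len(vals))
                      for start, vals in buckets.items())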
You know what's awesome about @Mastodon today? The text input boxes aren't broken like they are on facepalm's Android app.
Some days it's the little things