
While I've got a big 🧵 going on Twitter for the mirror.fcix.net project @warthog9 and I are working on twitter.com/KWF/status/1509276

I figured Mastodon would be a fitting place to put the thread for the other half of the same project, where John and I are building a fleet of "Micro Mirrors" to help distributions continue to operate for free.

So we're building mirrors and then passing the hat around to fund this silly little project in exchange for entertainment. paypal.me/kennethfinnegan

🧵


As part of the MFN (mirror.fcix.net) project, John built an amazing Grafana/Influx telemetry system that takes every HTTP request in the logs and parses out which project, which release, which ASN, and how many bytes each request was, which is giving us an amazing level of visibility into what's going on with that large software mirror.
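To give a flavor of what that parsing looks like, here's a minimal sketch of the idea in Python (not John's actual pipeline; the log format, the /project/... path layout, and the sample line are all assumptions, and the ASN lookup is skipped):

import re
from collections import Counter

# Rough shape of an nginx "combined"-style access log line; the real
# system also resolves each client IP to an ASN, which is omitted here.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "GET /(?P<project>[^/ ]+)[^"]*" '
    r'\d{3} (?P<bytes>\d+)'
)

def bytes_per_project(log_lines):
    """Sum response bytes per top-level directory, i.e. per project."""
    totals = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            totals[m.group("project")] += int(m.group("bytes"))
    return totals

sample = ['203.0.113.7 - - [02/Apr/2022:00:00:01 +0000] '
          '"GET /epel/9/Everything/x86_64/Packages/f/foo.rpm HTTP/1.1" 200 1048576']
print(bytes_per_project(sample))  # Counter({'epel': 1048576})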

This led to the realization that I'm now able to calculate what I'm calling the "CDN efficiency" of every project, which is the number of bytes served per day divided by the number of bytes used on disk.

Look up the day's "bytes served" and divide by the output of "du -h -d 1 /data/mirror"
twitter.com/KWF/status/1510028

And we get numbers like the following:
centos 9.02
manjaro 2.55
epel 1.97
rocky 0.32
fedora 0.23
centos-altarch 0.21
almalinux 0.20
centos-stream 0.11

So for example, the 1.97 CDN efficiency for EPEL means that for the 269GB used on our mirror to host that project, we're serving 1.97 times that (530GB) per day.
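In code form the metric is just a ratio. A quick sketch with the EPEL numbers from above plugged in (the byte counts are the inputs you'd pull from the telemetry and from du):

GB = 10**9

def cdn_efficiency(bytes_served_per_day, bytes_on_disk):
    """Bytes served per day divided by bytes used on disk."""
    return bytes_served_per_day / bytes_on_disk

print(round(cdn_efficiency(530 * GB, 269 * GB), 2))  # -> 1.97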

Which led to the realization that even though we're using almost 40TB on MFN to host projects, something like 60-70% of our actual served bandwidth is coming from only the most popular 3TB worth of files.

So while these big heavy-iron mirrors are needed and valuable as foundational capacity and for the long tail of projects, what if we added more mirroring capacity and distribution by building additional mirrors which are:
* smaller
* cheaper
* easier to host

And scattering them around in more places on the Internet to combat the consolidation that seems to be happening pervasively across all strata of the Internet?

Enter the concept of the Micro Mirror. github.com/PhirePhly/micromirr

What if we built a tiny Micro Mirror that only had 2TB of storage in it and only served a few of the smallest and most popular projects?

Not as a replacement for the monolithic heavy iron mirrors, but as a way to take some of the hottest load off of them so the heavy iron mirrors can spend more of their time doing what they're good at, which is serving the long tail of the less popular content?

So for our initial proof of concept, we're using the HP T620 thin client as our base, since these are cheap and plentiful on eBay, low power, fanless, and I literally stock a pile of them in my apartment for projects where I need servers with N→0 horsepower.

Two DDR3 SODIMM slots, an M.2 SATA slot, an mPCIe slot, a 1GbaseT NIC, and it uses <15W of power from a 19V soap-on-a-rope power supply.

2x4GB RAM sticks and a 2TB WD Blue SSD, and we're able to put together the whole box for <$250

So for less than the cost of one of the 16TB hard drives used in MFN, we're able to build a Micro Mirror.

The big question that we're working to answer now is whether this is a more or less effective use of resources to support Linux software distribution.

Build another monolith vs build 10 more micro mirrors to shed load off the existing monoliths.

Initial benchmarks on the T620 hardware look very good. It can read and write to the SSD at 3-4Gbps, so the amount of RAM in the MicroMirror beyond "enough to operate the control plane" doesn't matter anywhere near as much as on a spinning rust mirror, because shoveling content from SSD to 1Gbps Ethernet is effectively free.

The 4 core GX-415 CPU in these thin clients is able to handle serving line rate small HTTPS requests out the NIC without even maxing a single core.

Which I think is a testament both to how freaking powerful 15W TDP CPUs have gotten and to how battle-hardened and optimized software stacks like Nginx are for performance.

Not usable as an end user desktop running a web browser, but able to handle updates for a million servers per day. 🙄

So this seems encouraging as the basis for a hardware platform that meets our needs:
1. cheap
2. low power
3. easy to transport
4. small enough that colocating it won't be a burden for hosts

So we can now build the hardware and manage the instance, and for each server we just need to find an ISP with 1Gbps of surplus egress transit bandwidth and a lack of manpower or interest in managing their own mirror server.

They give us an IP address, space, power, and then can just forget about this thing because it's smol and in the absolute worst case is only pumping out 1Gbps of what can be treated as entirely scavenger class traffic below their customer data.

So our initial alpha deployment is sitting two racks down from my AS7034 Phirephly Design network in Fremont and hosting six projects on a 2TB SSD. codingflyboy.mm.fcix.net/

We're still working on the cosmetics of it so it doesn't look like a default Nginx install; turns out the autoindex formatting mangles .xml files getting served from disk 🤦

To keep it interesting, we've added the rule that we only deploy one micromirror per building, so HE FMT2 gets checked off the list.

CodingFlyBoy is now live for EPEL, openSUSE, and Arch Linux, putting it at 3/6 projects live.

My gut feeling is that the sweet spot for these micro mirrors is going to be about 100Mbps of baseline load, so we have 900Mbps of burst capacity for individual requests / something happening like a release day or a CVE.

So if we end up with this thing running with its NIC at >20% capacity ALL the time, I think we overshot and need to put fewer projects on each Micro Mirror.
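For what it's worth, here's the back-of-the-envelope arithmetic behind that gut feel (my numbers, not a measured result):

NIC_BPS = 1_000_000_000  # the T620's 1GbaseT port

def headroom(baseline_mbps):
    """Daily transfer at a given baseline load, plus the burst capacity left over."""
    baseline_bps = baseline_mbps * 1_000_000
    tb_per_day = baseline_bps / 8 * 86_400 / 10**12  # bits/s -> bytes/day -> TB/day
    burst_mbps = (NIC_BPS - baseline_bps) / 1_000_000
    return tb_per_day, burst_mbps

tb, burst = headroom(100)
print(f"{tb:.2f} TB/day baseline, {burst:.0f} Mbps of burst headroom")
# -> 1.08 TB/day baseline, 900 Mbps of burst headroom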

I also think it's a good demonstration of how the MMs with 8GB of RAM really are leaning on the fact that they're entirely SSD based.

This micro mirror with three projects is reading more from disk (left) than our big iron mirror with 384GB of RAM (right).

I so badly want to do something cute like hack a 2TB laptop spinner into these T620s, but it just doesn't make sense for how hot the entire working set is on these small mirrors.

There's an undergrad senior project for a stats major in here about how to model HTTP requests to a software mirror and what a good metric is for how close to full capacity a mirror is operating. My 100Mbps base continuous load is just a gut feel.

The Poisson distribution is a trash model for HTTP requests that should never be used when discussing load on update mirrors, but WHAT IS a good model?
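If anyone wants to poke at that, one simple first test (my suggestion, not something wired into our telemetry) is to bin the access log into per-second request counts and look at the variance-to-mean ratio; a Poisson process sits near 1, and release-day bursts should push mirror traffic well above it:

import statistics
from collections import Counter

def dispersion_index(request_timestamps):
    """Variance/mean of requests per second; ~1 would be Poisson-like."""
    counts = Counter(int(t) for t in request_timestamps)  # epoch seconds
    lo, hi = min(counts), max(counts)
    per_second = [counts.get(s, 0) for s in range(lo, hi + 1)]  # keep idle seconds
    mean = statistics.mean(per_second)
    return statistics.variance(per_second) / mean if mean else float("nan")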

We have caused our first negative impact for a project with our Micro Mirrors. :(

Not really MM-specific, but it turns out that the sample code online for making pretty autoindex pages using XSLT has Nginx try to apply the template to EVERY file served, not just the generated directory listings.
github.com/AlmaLinux/mirrors/p

The solution was to be more specific, so that our autoindex and styling are only applied to paths ending in a forward slash:
github.com/PhirePhly/micromirr

Someone decided to crawl codingflyboy.mm.fcix.net for all of EPEL over HTTP, which is... a choice.

But the hardware didn't really care. It was still able to serve it as fast as the other end was able to ask for it.

Good progress is being made by @warthog9 to get the fleet of Micro Mirrors to all stream their telemetry and logs back to a single watchtower host, so we're going to be able to monitor our fleet from a single pane of glass and generate ridiculous stack charts to compare this fleet of $200 thin clients against our $3500 heavy iron mirror.

Here's the last 24 hours of data for just the codingflyboy.mm.fcix.net mirror. Pretty respectable for a software mirror running on a thin client!


@kwf Maybe with nginx as a caching „frontend“ to the real mirror? Just allocate enough disk to the cache and a long expiry period?

@falk The problem is you don't want a long expiry for the main metadata index, and thin frontends that lose connectivity to the backend server start getting into a lot of strange failure modes, so we decided to stick to standalone mirrors

@kwf Oooh, didn’t think about the index files. And you’re right about the failure modes (valid index files with packages missing..)

Even with really small caches (on the order of 10GB-20GB) the cache hit rate is extremely high, but every time anyone tries to build these overly complex mirror setups, they inevitably end up in some weird split brain scenario that breaks things.

We want each to be a functionally independent mirror that meets distros' expectations about how it behaves and when they should disqualify it and take it out of the rotation.

@kwf I think that says more about the web browser than anything else.

@smitty You're going to have to set up a mirror all on your own, buddy. Sorry

@kwf I’m trying to get more ham radio related projects on my systems. Make use of that AMPR network.

@smitty Definitely a more _interesting_ use of your network than pumping out Linux updates.

I'm just looking for idle capacity out in the world and shaving off a little bit of it to be useful

@kwf I don't have any opinion on the stats. I just wanted to say this has been a fascinating thread and I really appreciate that you've been posting it over here on mastodon.

@kwf @zbrown

You can add FOSS Software Distribution Infrastructure Sausage Maker to your long list of courtly titles.

@kwf depending on the analysis you want to do, negative binomial might be better than Poisson (cf. johndcook.com/blog/2009/11/03/ and links from there). It isn't memoryless, so recursively reasoning about request counts gets harder, but it might be better as a distribution for a predictive model (because the variance and mean are no longer the same parameter, and you can think of it as a mixture of Poisson distributions). You could even consider en.wikipedia.org/wiki/Conway%E, as suggested in the comment.
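To make the "mixture of Poissons" view concrete, a tiny simulation with illustrative parameters (nothing here is fit to real mirror traffic): draw each interval's rate from a Gamma distribution, then the count from a Poisson at that rate. The marginal counts are negative binomial, and the variance lands well above the mean, unlike a plain Poisson where the two are equal.

import numpy as np

rng = np.random.default_rng(42)
rates = rng.gamma(shape=2.0, scale=5.0, size=100_000)  # E[rate]=10, Var[rate]=50
counts = rng.poisson(rates)                            # Gamma-Poisson mixture
print(f"mean={counts.mean():.1f}, variance={counts.var():.1f}")
# -> roughly mean=10, variance=60 (overdispersed)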

@kwf it sounds like one thing you might gain out of this project (just looking at your thread here) is factoring apart the load so that the more popular subset of files is more widely available on a custom CDN of sorts. Depending on the over- or underdispersion of the distribution of HTTP requests, a simple model using the Poisson distribution may be more or less suitable.

@kwf It isn't clear to me, however, if you really need a super clever model for the overall distribution of HTTP requests. Maybe that's so because the right model would help you factor the distribution in a way that shows you how to break things apart onto micro mirrors.

This leads to a question: maybe once you have split things up onto micro mirrors, the load on each mirror would be more Poisson-like, or exhibit underdispersion. What would you expect or want to see?

@trurl I have no idea what I'm looking for.

We're focusing on the operational and political problems for micro mirrors. Numeric metrics for their success criteria are a whole other ballgame.

@kwf how long does it usually take for traffic to ramp up on a new mirror?

@nicoduck Depends on the project.

Fedora stuff is pretty good since their portal is self-serve, so a few hours. Most projects are 2 days to 2 weeks.

Debian is the high end at 1-5 months to get your mirror added to the load balancer.

@kwf also tcp is too slow, the protocol too obnoxious sooooooo udp void screaming it is!

It's a lot faster now 😜

@kwf I like this concept and it has merit, especially if you have a stack of them and round-robin the load across them; that will make up for slower port speed and other internals. What OS are you running on these and the big iron?

@Tubsta our plan was to try and get every one of these on a different network and in a different building.

Some projects support weights per mirror, and for those that don't we're using the argument that a 1G mirror with 4 projects is comparable to a 10G mirror with 40 projects.

The big iron is running Debian, but I used the mm as an opportunity to force myself to use Alma for a project.

@kwf if you are looking for others to mirror, the BSD projects would be nice to add. One of our large edu mirrors in Australia is what I am looking at for content inspiration. Though it looks like you get inserted into projects' GeoCDNs so users don't have to select your mirror??

@Tubsta FreeBSD doesn't seem to use community mirrors, so we weren't planning on mirroring them.

@kwf No, and it is quite frustrating for #FreeBSD users, especially in Australia. The other BSD projects (OpenBSD, NetBSD and DragonflyBSD) all have traditional mirror configurations.

@Tubsta @kwf it's really an area where IPFS would make sense, I wish this would be more production ready and widespread.

This is exactly the right tool to solve mirroring and getting data from the closest source without having to manage tier 1/2/3 mirrors and make sure they are synchronized.

@kwf @Tubsta could you explain why? I'm really interested. I could be wrong indeed.

@solene @Tubsta Our single mirror, as one of hundreds of mirrors, is doing 10TB per day in traffic. And the time to first byte for requests is milliseconds. Does IPFS have the ability to meet those sorts of performance numbers?

If we started pulling 10TB per day out of the IPFS mesh, that bandwidth still needs to come from somewhere, and now we're moving it around multiple hops?
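For scale, a quick back-of-the-envelope conversion: 10TB per day is already most of a continuously saturated gigabit.

TB_PER_DAY = 10
avg_gbps = TB_PER_DAY * 10**12 * 8 / 86_400 / 10**9  # bytes/day -> bits/s -> Gbps
print(f"{avg_gbps:.2f} Gbps average")  # -> 0.93 Gbps average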

@solene @Tubsta If I understand correctly, IPFS uses a hash across the whole bucket as the URL, so we need to publish a new URL every time we update the directory contents?

@solene What problem with the traditional mirroring system is IPFS improving or performs better at?

What does all the additional operational complexity improve for distros, mirror operators, or users over the existing rsync + HTTP model?

@kwf @Tubsta Just see IPFS as a dynamic torrent system. Instead of having to generate a torrent file that is fixed and can't be updated, you could use an IPNS name that defines the latest available content of your project over IPFS, and then continuously keep synchronised to that IPNS.

The point, in my opinion, is that a peer-to-peer model would allow pushing a lot more bandwidth, especially when you get far from the original mirror. It also adds resilience: you don't need to choose a mirror as a user, so you shouldn't be concerned if one or multiple mirrors are down.

I wrote about this topic last year and ran an experiment distributing OpenBSD packages over IPFS. If you want to read about it: dataswamp.org/~solene/tag-ipfs

I'm convinced it would really work for distributing massive repository content such as an OS version archive. It's a really different model than using mirrors, which is also a good model; both have pros and cons.

@solene @Tubsta I think you wrote the conclusion for me: " I disable the IPFS service because it's nearly not used and draw too much CPU on my server."

@solene @Tubsta Per usual when someone suggests we use IPFS, I should include a citation to @warthog9's paper on why Bittorrent really isn't that great for distros.

kernel.org/doc/ols/2008/ols200

@kwf @solene @Tubsta so, random paper-napkin math: the retail cost of hosting a mirror comes out to about $5.55/TB moved, including hardware costs. This is patently high, and no mirror actually costs this.

Went to see what the similar costs for IPFS would be, just for a straight monetary comparison, and the website is down? The gateway is down? Is the IPFS website hosted on IPFS, maybe, and the network is down?


@kwf you're literally recreating the history of Bandaid. You just described the exact rationale for Bandaid XT.

@mxshift That's pretty encouraging to hear.

I suspect every distributed system goes through cycles of consolidation and then distribution closer to the edge.
