rozenmd 3 hours ago

Great write-up!

I had a similar project back in August, when I realised my Postgres DB's performance was blocking me from implementing features users commonly ask for (querying out to 30 days of historical uptime data).

I was already blown away by the performance (200ms to query what Postgres was doing in 500-600ms), but then I realised I hadn't put an index on the ClickHouse table. Now the query returns in 50-70ms, and that includes network time.
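
For anyone curious, here's a rough sketch of what that looks like (the schema and names are made up, and I'm assuming the clickhouse-connect Python client): in ClickHouse the primary "index" is the ORDER BY key of a MergeTree table, so sorting by (monitor_id, checked_at) lets a 30-day range query skip most of the data.

    # Hypothetical schema and client -- illustrative only.
    import clickhouse_connect

    client = clickhouse_connect.get_client(host='localhost')

    # The ORDER BY key acts as ClickHouse's sparse primary index:
    # rows are sorted on disk, so range scans prune whole granules.
    client.command("""
        CREATE TABLE IF NOT EXISTS uptime_checks (
            monitor_id UInt64,
            checked_at DateTime,
            status_code UInt16,
            latency_ms UInt32
        )
        ENGINE = MergeTree
        ORDER BY (monitor_id, checked_at)
    """)

    # The 30-day historical query now prunes by the sorting key.
    rows = client.query("""
        SELECT toDate(checked_at) AS day, avg(latency_ms) AS avg_ms
        FROM uptime_checks
        WHERE monitor_id = 42
          AND checked_at >= now() - INTERVAL 30 DAY
        GROUP BY day
        ORDER BY day
    """).result_rows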

mperham an hour ago

Seems weird not to use Redis as the buffering layer plus a minutely cron job. That's a lot simpler than installing Kafka + Vector.
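
Something like this minimal sketch, assuming redis-py and clickhouse-connect (all names and the event shape are hypothetical): writers RPUSH events onto a list on the hot path, and a minutely cron job drains it into ClickHouse in batches.

    import json
    from datetime import datetime

    import redis
    import clickhouse_connect

    r = redis.Redis()
    ch = clickhouse_connect.get_client(host='localhost')

    BATCH = 10_000

    def flush():
        while True:
            # MULTI/EXEC pipeline: grab a chunk and trim it atomically,
            # so concurrent RPUSHes from the app stay safe.
            pipe = r.pipeline()
            pipe.lrange('events', 0, BATCH - 1)
            pipe.ltrim('events', BATCH, -1)
            raw, _ = pipe.execute()
            if not raw:
                break
            events = [json.loads(x) for x in raw]
            ch.insert(
                'events',
                [(e['monitor_id'],
                  datetime.fromisoformat(e['ts']),
                  e['status']) for e in events],
                column_names=['monitor_id', 'ts', 'status'],
            )

    if __name__ == '__main__':
        flush()  # cron: * * * * * python flush.py

One big insert per minute (or per 10k rows) instead of thousands of tiny ones is the whole point.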

nasretdinov 2 hours ago

BTW you could've used e.g. kittenhouse (https://github.com/YuriyNasretdinov/kittenhouse, my fork) or just a simpler Buffer table, with 2 layers and a larger aggregation period than in the example.
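
A minimal sketch of the Buffer-table variant, assuming a destination MergeTree table called events (the thresholds are illustrative, not the article's): inserts land in the Buffer engine table, which flushes to the real table once time/row/byte limits are hit.

    import clickhouse_connect

    ch = clickhouse_connect.get_client(host='localhost')

    # Buffer(db, dest_table, num_layers, min_time, max_time,
    #        min_rows, max_rows, min_bytes, max_bytes)
    # 2 layers, flushed every 30-60s or at 1M rows / ~100 MB.
    ch.command("""
        CREATE TABLE events_buffer AS events
        ENGINE = Buffer(default, events, 2, 30, 60,
                        100000, 1000000, 10000000, 100000000)
    """)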

Alternatively, you could've used the async insert functionality built into ClickHouse (https://clickhouse.com/docs/optimize/asynchronous-inserts). All of these solutions are operationally simpler than Kafka + Vector, although obviously it's all tradeoffs.
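
The async-insert route is even less code, e.g. (table, schema and client again made up):

    from datetime import datetime

    import clickhouse_connect

    ch = clickhouse_connect.get_client(
        host='localhost',
        settings={
            'async_insert': 1,           # server buffers inserts itself
            'wait_for_async_insert': 0,  # ack immediately, flush in batches later
        },
    )

    # Tiny inserts no longer create tiny parts; the server batches them.
    ch.insert('events', [(42, datetime.now(), 200)],
              column_names=['monitor_id', 'ts', 'status'])

Note that wait_for_async_insert=0 trades durability acks for latency; set it to 1 if you want the ack only after the buffer is flushed.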

  • devmor 2 hours ago

    Frankly, a lot of simpler options came to mind while reading through this.

    But I imagine the write-up elides myriad future concerns and doesn't entirely convey the pressure and stress of trying to solve such a high-scale problem.

    Ultimately, going with a somewhat more complex solution, one that involves additional architecture but has been tried and tested by a third party you trust, can sometimes be the more fitting end result. Assurance often weighs more than simplicity, I think.

    • nasretdinov 2 hours ago

      While kittenhouse is, unfortunately, abandonware (even though you can still use it and it works), you can't say the same about e.g. async inserts in ClickHouse: it's a very simple and robust solution to exactly the problem that PHP (and some other languages') backends often face when trying to use ClickHouse.

tlaverdure 2 hours ago

Thanks for sharing. I really enjoyed the breakdown, and it's great to see small tech companies helping each other out!

frenchmajesty 2 hours ago

Thanks for sharing, I enjoyed reading this.

albertgoeswoof an hour ago

Currently at the millions stage with https://mailpace.com, relying mostly on Postgres.

Tbh this terrifies me! We don’t just have to log the requests but also store the full emails for a few days, and they can be up to 50 MiB in total size.

But it will be exciting when we get there!