In my long career as an “almost digital entrepreneur” (a fancy way to say I’ve tried a thousand things online without making a single cent), I never really felt that “this is it, I’m so close, I’ll finally quit everything and update my passport: job title? SaaS founder.”

(Small detail: I don’t even have a passport. But I like to imagine that if I did, I’d want something cooler than “unemployed creative” written on it).

For years, I collected side projects, hobbies, half-dead MVPs, and random nonsense, all with the same ending: super hyped at the beginning, burned out in the middle, completely abandoned by the end.

But a couple years ago, I decided to take things more seriously (well… I try). I started building SaaS products. Simple, fast stuff, nothing too fancy. And finally, after a long toxic relationship with perfectionism, I realized something super basic but actually powerful: I don’t need thousands of users. I just need 1.2 paying users a day. Literally.

Not to get rich, no Lamborghinis parked outside (also, I live in an apartment with no garage), but enough to live well, keep building, and maybe say “this is my job” without looking down in shame.

It’s part math, part mindset. Like they told us in the first year of computer science: big problems get solved by breaking them into smaller ones. 100 users a day? Anxiety. 1.2 users a day? I can breathe.

So yeah, this is my new mantra: “1.2 a day to keep the office job away.”

Let’s see where this road takes me.


Comments URL: https://news.ycombinator.com/item?id=43847305

Points: 9

# Comments: 5



from Hacker News: Front Page https://ift.tt/F3gxqUO

Hi HN! I made Beatsync, an open-source browser-based audio player that syncs audio with millisecond-level accuracy across many devices.

Try it live right now: https://www.beatsync.gg/

The idea is that with no additional hardware, you can turn any group of devices into a full surround sound system. MacBook speakers are particularly good.

Inspired by the Network Time Protocol (NTP), I do clock synchronization over WebSockets and use the Web Audio API to keep audio latency under a few ms.
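The NTP-style handshake boils down to four timestamps per round trip. As a generic sketch of the technique (my illustration, not Beatsync's actual code):

```typescript
// NTP-style clock offset estimation over a request/response round trip.
// t0: client send time, t1: server receive time,
// t2: server send time,  t3: client receive time (all in ms).
// Generic sketch, not Beatsync's implementation.
function estimateOffset(t0: number, t1: number, t2: number, t3: number) {
  // Total time spent on the wire (excludes server processing time).
  const roundTrip = (t3 - t0) - (t2 - t1);
  // Assuming symmetric network delay, this is how far the client
  // clock lags the server clock.
  const offset = ((t1 - t0) + (t2 - t3)) / 2;
  return { offset, roundTrip };
}

// Example: client clock 50 ms behind the server, 10 ms one-way delay.
const { offset, roundTrip } = estimateOffset(100, 160, 165, 125);
// offset === 50, roundTrip === 20
```

Averaging the offset over several round trips (and discarding high-RTT samples) is the usual way to push the estimate toward millisecond accuracy.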

You can also drag devices around a virtual grid to simulate spatial audio — it changes the volume of each device depending on its distance to a virtual listening source!
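One plausible scheme for that distance-based volume (an assumption on my part, not necessarily Beatsync's formula) is a simple inverse-distance rolloff:

```typescript
// Hypothetical distance-based gain for the virtual grid: a device's
// volume falls off with its distance from the virtual listening source.
// Inverse-distance rolloff clamped to 1.0; an illustrative assumption,
// not Beatsync's actual curve.
function gainForDevice(
  device: { x: number; y: number },
  source: { x: number; y: number },
  refDistance = 1 // distance at which gain is exactly 1.0
): number {
  const d = Math.hypot(device.x - source.x, device.y - source.y);
  return Math.min(1, refDistance / Math.max(d, refDistance));
}
```

In the Web Audio API, the resulting value would typically be applied per device via a `GainNode` between the audio source and the destination.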

I've been working on this project for the past couple of weeks. Would love to hear your thoughts and ideas!


Comments URL: https://news.ycombinator.com/item?id=43835584

Points: 7

# Comments: 1



from Hacker News: Front Page https://ift.tt/g32lBPu

I thought the HN crowd would appreciate this story I wrote about the keeper of a university's lab animals. In reporting articles about science, and being a biology-watcher generally, I’ve had an uneasy time squaring my enthusiasm for cutting-edge biomedical research with the fact that this research so regularly involves breeding animals just to give them diseases and kill them. This is done as humanely as possible, of course.

The contrast hit me most vividly during the pandemic when I was writing an article about the immune system. [1] One of the scientists I spoke to told me about putting hamsters on warming plates, picking them up gently — in general, caring for and about them — and then feeling grief at their deaths. But of course understanding the immune response during infection with covid was a worthy cause. I felt no judgement towards this scientist; they are in a difficult position.

There was another more direct bit of inspiration, when I read this article [2], in 2023, about the toll that caring for laboratory animals could take on people’s mental health:

> Besides the symptoms Sessions experienced, those who handle lab animals may face insomnia, chronic physical ailments, zombielike lack of empathy, and, in extreme cases, severe depression, substance abuse, and thoughts of suicide. As many as nine in 10 people in the profession will suffer from compassion fatigue at some point during their careers, according to recent research, more than twice the rate of those who work in hospital intensive care units. It’s one of the leading reasons animal care workers quit.

That left an impression on me, and also armed me with a character: the forgotten-about, somewhat miserable vivarium worker.

The story obviously takes many liberties with fact — it is fiction — but I also tried to ground it in reality, and stuff that you might think I made up (the guillotine, the crazy VR sphere in the first paragraph), I did not.

I hope you enjoy! If nothing else I expect you’ll appreciate the illustrations, done by my friend Ben Smith [3].

[1]: https://www.newyorker.com/magazine/2020/11/09/how-the-corona...

[2]: https://www.science.org/content/article/suffering-silence-ca...

[3]: https://www.stephenbonesproductions.com/


Comments URL: https://news.ycombinator.com/item?id=43710761

Points: 45

# Comments: 13



from Hacker News: Front Page https://ift.tt/dImfsWD

Today, I noticed that my behavior has shifted over the past few months. Right now, I exclusively use ChatGPT for any kind of search or question.

Using Google now feels completely lackluster in comparison.

I've noticed the same thing happening in my circle of friends as well—and they don’t even have a technical background.

How about you?


Comments URL: https://news.ycombinator.com/item?id=43619768

Points: 38

# Comments: 107



from Hacker News: Front Page https://ift.tt/rYyIdQK

Last week was big for open source LLMs. We got:

- Qwen 2.5 VL (72b and 32b)

- Gemma-3 (27b)

- DeepSeek-v3-0324

And a couple weeks ago we got the new mistral-ocr model. We updated our OCR benchmark to include the new models.

We evaluated 1,000 documents for JSON extraction accuracy. Major takeaways:

- Qwen 2.5 VL (72b and 32b) are by far the most impressive. Both landed right around 75% accuracy (equivalent to GPT-4o’s performance). Qwen 72b was only 0.4% above 32b, well within the margin of error.

- Both Qwen models surpassed mistral-ocr (72.2%), which is specifically trained for OCR.

- Gemma-3 (27B) only scored 42.9%. Particularly surprising given that its architecture is based on Gemini 2.0, which still tops the accuracy chart.
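The margin-of-error claim checks out with a quick standard-error estimate, modeling each of the 1,000 documents as an independent pass/fail trial (the benchmark's actual scoring may be more granular):

```typescript
// Standard error of an accuracy estimate measured over n documents,
// treating each document as a Bernoulli pass/fail trial.
function standardError(p: number, n: number): number {
  return Math.sqrt((p * (1 - p)) / n);
}

// At ~75% accuracy over 1,000 documents:
const se = standardError(0.75, 1000);
// se ≈ 0.0137, i.e. roughly ±1.4 points per model — so a 0.4-point gap
// between Qwen 72b and 32b is well inside the noise band.
```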

The data set and benchmark runner are fully open source. You can check out the code and reproduction steps here:

- https://getomni.ai/blog/benchmarking-open-source-models-for-...

- https://github.com/getomni-ai/benchmark

- https://huggingface.co/datasets/getomni-ai/ocr-benchmark


Comments URL: https://news.ycombinator.com/item?id=43549072

Points: 12

# Comments: 1



from Hacker News: Front Page https://ift.tt/7Kmchjz

I believe the best way to learn a language is by doing an in-depth project. This is my first Zig project, intended for learning the ropes of publishing a Zig package. It turns out to be quite solid and performant. It might be a bit over-engineered.

This little library is packed with the following features:

  - Building a dependency graph from dependency data.
  - Performing a topological sort on the dependency graph.
  - Generating dependence-free subsets for parallel processing.
  - Detecting and reporting cycles.
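The library is written in Zig, but the core idea behind the last three features can be sketched with Kahn's algorithm (illustrative TypeScript, not the library's code):

```typescript
// Kahn's algorithm: repeatedly emit nodes with no remaining incoming
// edges. If any nodes are left unemitted, they form or depend on a
// cycle. Illustrative TypeScript, not the library's Zig implementation.
function topoSort(edges: string[][]): string[] | null {
  const indegree = new Map<string, number>();
  const adj = new Map<string, string[]>();
  for (const [from, to] of edges) {
    if (!indegree.has(from)) indegree.set(from, 0);
    indegree.set(to, (indegree.get(to) ?? 0) + 1);
    if (!adj.has(from)) adj.set(from, []);
    adj.get(from)!.push(to);
  }
  // Start with every node that nothing depends on.
  const queue = [...indegree.keys()].filter((n) => indegree.get(n) === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const n = queue.shift()!;
    order.push(n);
    for (const m of adj.get(n) ?? []) {
      indegree.set(m, indegree.get(m)! - 1);
      if (indegree.get(m) === 0) queue.push(m);
    }
  }
  // Fewer emitted nodes than total nodes means a cycle exists.
  return order.length === indegree.size ? order : null;
}
```

The "dependence-free subsets" feature falls out naturally: each generation of zero-indegree nodes has no edges among its members, so each batch can be processed in parallel.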

Comments URL: https://news.ycombinator.com/item?id=43549618

Points: 23

# Comments: 5



from Hacker News: Front Page https://ift.tt/Fh9aeC8