Scaling a side project: The story of daily.dev

By Ido Shamun

daily.dev has gone through different phases. It started as a side project developed by a single person and then evolved into an open-source project with active contributors and a vibrant community. It eventually turned into a company of over 20 people, with multiple engineering teams and a user base in the hundreds of thousands. This post explores how the system evolved to meet each phase's demands.

Ideation

daily.dev emerged from the daily grind of trying to keep pace with the ever-evolving developer ecosystem. Each day brings something new, making the task challenging and, at times, exhausting. However, staying up-to-date is a linchpin of professional growth for software developers. I used to have a Slack channel with many RSS subscriptions and Twitter accounts linked to it, and I reviewed it almost every day. I partnered with Nimrod (product, currently the CEO of daily.dev) and Tsahi (design), and together we embarked on the challenge: helping developers stay up to date. Nowadays, daily.dev has expanded, and it's also a professional network for developers.

The "Garage" phase

Side project development is very different from developing in a day job.
Your available time is limited, perhaps 2-3 hours a day, with weekends as the battleground for bridging the productivity gap. Realistically, committing such extensive hours every week is often unattainable.
Thus, efficiency becomes paramount—time must be wielded judiciously, priorities set, and every available tool harnessed. The strategy boils down to a determined effort to cut corners wherever feasible.

At our inception, I opted for familiarity, choosing technologies I was already proficient with, and embraced simplicity. The application took shape in React, backed by Node.js and PostgreSQL. The API project ran on Google App Engine, while the news discovery and content scraping pipeline leveraged Google Cloud Functions. Cloud SQL was used to provision our PostgreSQL instance. Given our initial focus on a browser extension, a dedicated frontend hosting solution wasn't needed.

The selection criteria for services revolved around "zero ops". While these services are not flexible, they provide peace of mind, speed up development, and often come with generous free tiers. Superfeedr, a managed service for RSS subscriptions that triggers a webhook on every new post, was vital in this early architecture. To move fast, we invested in this service rather than reinventing the wheel.
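To make this concrete, here is a minimal sketch of what the discovery entry point looked like conceptually: an HTTP Cloud Function that receives a Superfeedr webhook and hands new items to the scraping pipeline. It's illustrative only; the handler name and payload shape are assumptions, not our actual code or Superfeedr's exact schema.

```typescript
import * as functions from '@google-cloud/functions-framework';

// Illustrative payload shape; Superfeedr's actual notification schema may differ.
interface FeedItem {
  permalinkUrl: string; // link to the newly published post
  title: string;
}

// HTTP-triggered Cloud Function: Superfeedr calls this webhook on every new post.
functions.http('onNewPost', async (req, res) => {
  const items: FeedItem[] = req.body?.items ?? [];
  for (const item of items) {
    // In the real pipeline, this step enqueued the URL for scraping and enrichment.
    console.log(`Discovered: ${item.title} -> ${item.permalinkUrl}`);
  }
  res.status(200).send('ok');
});
```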

In less than a month, we launched daily.dev and submitted the extension to the Chrome Store.

Making some money 🤑

Building a side project requires every drop of motivation you can get. For us, it was generating revenue. The easiest way to do that was to introduce ads into the product. Given that our primary component is the feed, we thought it should be quick to add an ad placement. But it was obvious to us that the ads shouldn't interfere with the user experience and should feel like a native part of the feed. Also, we didn't want to promote generic consumer products like shoes. We wanted to promote developer-oriented products: cloud providers, databases, productivity tools, etc. At that time, two major players existed in the domain of developer-oriented ad networks: BSA (Carbon Ads) and CodeFund (which no longer exists). We partnered with both and created a tiny ad server to toggle between them, as we didn't want to rely on a single network. And we were right. Making money was a true motivation for us to keep the project going. We started from hundreds of dollars a month and gradually scaled from there to the point where the three of us could leave our day jobs and work full-time on daily.dev.
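To illustrate the idea (not our actual ad server), toggling between networks can be as small as a prioritized fallback loop; the interfaces below are hypothetical:

```typescript
interface Ad {
  title: string;
  link: string;
  image: string;
}

// Hypothetical network client interface; the real integrations were BSA/Carbon and CodeFund.
interface AdNetwork {
  name: string;
  fetchAd(placementId: string): Promise<Ad | null>;
}

// Try each network in order and fall back when one has no fill or fails.
export async function serveAd(
  networks: AdNetwork[],
  placementId: string,
): Promise<Ad | null> {
  for (const network of networks) {
    try {
      const ad = await network.fetchAd(placementId);
      if (ad) return ad;
    } catch (err) {
      console.warn(`${network.name} failed, falling back to the next network`, err);
    }
  }
  return null; // no fill anywhere; the feed simply renders without an ad
}
```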

Launching our webapp

As we attracted more developers to daily.dev, we realized an extension was not enough: some users wanted to use daily.dev from mobile or simply didn't like the new tab experience. This was where we introduced our webapp. We chose Next.js to power our web application. We created a monorepo for our frontend, which consists of three major components: our shared library (the design system, hooks, and many other reusable elements), the browser extension, and the newly created webapp. We still use this structure to this day, as it helps us keep the extension and webapp in sync with little hassle. Vercel is our deployment solution due to its smooth DX and native Next.js support. Thanks to the Vercel and GitHub integration, every commit on the main branch gets deployed to production, and every PR gets a preview deployment, which is very useful for internal testing.
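As a rough illustration of the shared-library pattern (not our actual code; the names and paths are made up), a data hook can live once in the shared package and be consumed by both the webapp and the extension:

```typescript
// packages/shared/src/useFeed.ts (hypothetical path)
import { useEffect, useState } from 'react';

export interface Post {
  id: string;
  title: string;
}

// Fetches the feed and exposes it to any client that renders it, so the
// extension and the webapp stay in sync by construction.
export function useFeed(apiUrl: string): Post[] {
  const [posts, setPosts] = useState<Post[]>([]);
  useEffect(() => {
    fetch(`${apiUrl}/feed`)
      .then((res) => res.json())
      .then(setPosts)
      .catch(() => setPosts([]));
  }, [apiUrl]);
  return posts;
}
```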

Hiring the first engineering team

Once we had enough funds, I started hiring engineers. This was a pivotal moment for the project: finally, there was more than a single full-time developer working on it. I hired two talented web engineers who focused on building the frontend and the API. This gave us time to dedicate to infrastructure and deeper backend work. At that point, I knew we would expand further, and I needed to lay the foundation for scaling the team and the architecture. We already had our primary API service, some cron jobs, and cloud functions, so to streamline our deployments and provide a better DX, we decided on two things:

  • Migrating to Kubernetes - As I mentioned, our deployments were all over the place: we had App Engine, Cloud Functions, and managed cron jobs on Google Cloud, and it was hard to manage everything in a streamlined way. Kubernetes provides a cloud-agnostic environment for scheduling containers, supporting APIs, background workers, cron jobs, and other use cases. Instead of spreading our deployments across different platforms and having to learn the best practices of multiple systems, we could rely on Kubernetes, which is already adopted by many companies and has a thriving community. That's exactly what we did, and we're delighted with the transition to this day.
  • Embracing Pulumi - It allows you to write infrastructure the same way you write your code. Imagine provisioning a Kubernetes cluster using TypeScript, crazy huh? That's exactly what we do, thanks to Pulumi. We use their TS SDK, and we built an abstraction layer on top to make it easy to follow our best practices and automate deployment. I'm still the main contributor to our internal repo, although that's slowly changing. It provides building blocks to other engineers so they don't need to be Kubernetes experts to autoscale their service or know all the nuances of Kubernetes. See the sketch right after this list.
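For a flavor of what this looks like (a minimal sketch, not our internal abstraction layer; the image and names are placeholders), deploying a service to Kubernetes with Pulumi's TypeScript SDK is just regular code:

```typescript
import * as k8s from '@pulumi/kubernetes';

const appLabels = { app: 'api' };

// A plain Kubernetes Deployment expressed as TypeScript. Our internal layer
// wraps patterns like this so service owners get autoscaling and best
// practices without touching raw Kubernetes resources.
const deployment = new k8s.apps.v1.Deployment('api', {
  spec: {
    selector: { matchLabels: appLabels },
    replicas: 2,
    template: {
      metadata: { labels: appLabels },
      spec: {
        containers: [
          {
            name: 'api',
            image: 'gcr.io/example-project/api:latest', // placeholder image
            ports: [{ containerPort: 3000 }],
          },
        ],
      },
    },
  },
});

export const deploymentName = deployment.metadata.name;
```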

Building an analytics solution

Our story continues as we needed to scale our analytics solution. We started with Google Analytics, followed by Amplitude, until we reached the point where it was too expensive to use a third-party solution. According to our calculations, it would have cost us thousands of dollars a month, given our growth rate. Also, the data was not ours, and we couldn't use it for other purposes. We decided it was time to build an in-house data pipeline with a warehouse that we could utilize for many purposes.
Our architecture of choice was a simple API written in Go (more on why Go below) that gets the events from the client and sends them downstream via Google Pub/Sub. A Go worker fetches the events, does some processing, and stores them in BigQuery. We also made a custom React tracker that provides flexible and reliable tracking. This architecture proved itself, and we changed very little as we grew. We hooked BigQuery up to Preset to provide a dashboarding solution. Sometimes, we also sprinkle in a bit of Python for more complex analysis. Today, we process millions of events every day, it costs less than $2k/month including all the analysis we do, and we get to own the data.
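For the client side, here's a hedged sketch of what a custom React tracker can look like: batch events in memory and flush them to the analytics API periodically. The endpoint, event shape, and hook name are assumptions for illustration, not our actual tracker.

```typescript
import { useCallback, useEffect, useRef } from 'react';

interface TrackingEvent {
  name: string;                      // e.g. 'post click', 'impression'
  payload?: Record<string, unknown>; // extra context about the event
  ts: number;                        // client timestamp
}

// Batches events and flushes them periodically to the analytics API.
// The endpoint below is a placeholder, not the real daily.dev route.
export function useTracker(endpoint = 'https://api.example.com/e', flushMs = 5000) {
  const queue = useRef<TrackingEvent[]>([]);

  const flush = useCallback(() => {
    if (queue.current.length === 0) return;
    const batch = queue.current.splice(0, queue.current.length);
    // keepalive lets the request complete even if the page unloads
    fetch(endpoint, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ events: batch }),
      keepalive: true,
    }).catch(() => queue.current.unshift(...batch)); // retry on the next flush
  }, [endpoint]);

  useEffect(() => {
    const id = setInterval(flush, flushMs);
    return () => clearInterval(id);
  }, [flush, flushMs]);

  const track = useCallback((name: string, payload?: Record<string, unknown>) => {
    queue.current.push({ name, payload, ts: Date.now() });
  }, []);

  return track;
}
```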

Personalizing feed in real-time

Back then, our feed algorithm was very simple. It was based only on the number of clicks, and we filtered the content by your followed and blocked tags. We hit the point where we needed more: we had a much more diverse audience, and the need for more complex models and personalization was significant. We wanted to rank posts based on CTR (click-through rate) in near real-time. For that, we had to take impressions into account, and the write rate and storage were not something our API database could have handled. This was where we introduced Tinybird to power our feed. Tinybird is a managed solution on top of ClickHouse that allows quick iteration on data pipelines and turns a SQL query into an HTTP endpoint. Thanks to ClickHouse, materialization happens at ingestion time, allowing us to do real-time calculations. Tinybird offers importing BigQuery events, which is perfect for us. So, we streamed all our events to Tinybird and created a query that our API can fetch. This allowed us to take into account the massive amount of impression events we generate and personalize the users' experience. We calculated ranks per tag and source for every user periodically and adjusted the feed ranking accordingly. It was very straightforward thanks to ClickHouse's materialization features.
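Conceptually, consuming such a query from the API is just an HTTP call. The sketch below is illustrative: the pipe name, parameters, and response fields are hypothetical; only the general published-endpoint pattern follows Tinybird's docs.

```typescript
// Hypothetical pipe that returns per-tag CTR for a user, assumed to be
// published as a Tinybird HTTP endpoint.
interface TagRank {
  tag: string;
  impressions: number;
  clicks: number;
  ctr: number; // clicks / impressions, computed in the materialized view
}

export async function fetchUserTagRanks(userId: string): Promise<TagRank[]> {
  const url = new URL('https://api.tinybird.co/v0/pipes/user_tag_ctr.json');
  url.searchParams.set('token', process.env.TINYBIRD_TOKEN ?? '');
  url.searchParams.set('user_id', userId);

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Tinybird request failed: ${res.status}`);
  const body = (await res.json()) as { data: TagRank[] };
  return body.data; // Tinybird wraps result rows in a `data` array
}
```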

Migrating the content pipeline to Temporal

Our inventory of posts is crucial to our users' happiness. The choreography architecture of our content pipeline was enough to get us started. Still, from an operational standpoint, monitoring the process and understanding when things went wrong was challenging. From a development standpoint, it took a lot of work to introduce long workflows or even play around with the order of different steps per source or any other parameter. And we needed this flexibility to support new product features: for example, adding a step to generate a summary for every article, or tracking the engagement of an article on different websites and only adding it to our feed once it reaches a certain threshold. We realized we needed a different solution; the choreography architecture was not good enough anymore. After much research, we decided that Temporal was our way to go. The selling point, just like with Pulumi, is writing your workflows the same way you write your code. It has SDKs for most primary languages, and TypeScript is one of them. It even allows adding a failsafe sleep command for several days, which was amazing!
With this tool in hand, we refactored our content pipeline to Temporal. This gave us a clear view of the workflows that are running, allowed us to run some workflows for three days, and, most importantly, allowed us to define a workflow per scraped source, giving us the ability to deliver more content to our users.
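To show why this felt so natural, here's a minimal workflow sketch using the Temporal TypeScript SDK. The activity names, threshold, and exact steps are made up for illustration; proxyActivities and the durable sleep are real SDK primitives.

```typescript
import { proxyActivities, sleep } from '@temporalio/workflow';
// Hypothetical activities module; each activity runs outside the workflow sandbox.
import type * as activities from './activities';

const { scrapeArticle, generateSummary, trackEngagement, addToFeed } =
  proxyActivities<typeof activities>({ startToCloseTimeout: '5 minutes' });

const ENGAGEMENT_THRESHOLD = 100; // made-up number for the example

// One workflow per discovered URL: scrape, summarize, wait for engagement
// signals, and only then publish the post to the feed.
export async function ingestPostWorkflow(url: string): Promise<void> {
  const article = await scrapeArticle(url);
  const summary = await generateSummary(article);

  // Durable sleep: the workflow can safely wait for days and survive restarts.
  await sleep('3 days');

  const engagement = await trackEngagement(url);
  if (engagement >= ENGAGEMENT_THRESHOLD) {
    await addToFeed(article, summary);
  }
}
```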

Hiring a platform team

At this point, we realized that we needed to scale our backend team (what we call the platform team). We had too much to do, and one person was not enough. Deciding on the criteria for this team was challenging, but since the team would deal with performance-critical services, we decided that Go would be the standard language for this team. We had one or two Go services at that time, but most of the services were still TypeScript. We hired a very experienced team lead, followed by team members to support him. They had two primary objectives: to create a dedicated feed service and to migrate our content pipeline to Go. A new feed service would allow more complex algorithms and improve performance; of course, it still utilizes Tinybird as its data provider. As for the content pipeline, we experienced memory leaks with the TypeScript SDK, and every instance required a significant amount of memory. Moving to Go reduced the memory footprint and provided a more consistent load on the system. Go is still our primary language for most projects outside the frontend and our API: it compiles to a binary, has a low memory footprint, allows fine-grained control and performance optimizations, and has a rich ecosystem.

Conclusion

We learned so much about scaling a system from nothing to one that handles hundreds of thousands of users. Our primary product is our new tab extension. Can you guess how many new tabs happen every day at this scale? 😉

The "build vs buy" dilemma shifts over time. When we started, we bought every tool to cut development time. But as we evolved, we leaned towards different services that provide us with infrastructure and flexibility instead of a specific application. For example, instead of using superfeedr for rss, we use temporal for managing workflows. Usually, the latter is also more cost-effective in terms of scale but requires more development effort.

Once the product matures and you reach product-market fit, you can shift the balance between moving fast and doing things the right way. You can think about scale, tech debt, maintainability, operational cost, etc. You also have time to fix the hacks you made when you moved hyper-fast. It takes time, but you'll get there eventually. Both moving fast and doing the right thing have tradeoffs; pick your poison. A classic example is our first content pipeline: we first built it using cloud functions, then migrated it to Temporal while still using TypeScript to support more product requirements, and eventually migrated it to Go to reduce the memory footprint, increase the scalability of the system, and introduce a new system design to increase our velocity. On the other hand, our AI Search feature was built "the right way" from day one, as we knew it was here to stay, so we built it with full traceability, auditing, and a performance-oriented design.

Automation and documentation are the keys to keeping a consistent developer experience. This is critical when you transition from single-developer mode to an engineering team, as every team member has different skills and preferences. We have had automation from day zero for testing, deployments, and cron jobs, and we highly recommend it. We also have docs describing our best practices, onboarding, the nuances of every project, and the architecture, making it easier to get up to speed on the codebase.

DevOps is everyone's responsibility. This is a vital part of our engineering culture. Developers write the infrastructure-as-code part of their project and take ownership of every aspect of their system end-to-end. Some engineers are in charge of developing building blocks for others, so the rest don't need to be Kubernetes experts or know every best practice.

Each key learning here deserves a blog of its own, but this should give you an overview of what it takes and what the journey of daily.dev looks like.
