Christian Olear (a.k.a. Otsch)
Fullstack Web Developer (PHP backend focus)
from Linz, Austria (🇦🇹)
I can support you with:
- the development of tailored web apps and APIs,
- modernizing PHP legacy systems, and especially
- projects involving web crawling and scraping – my specialty.
Feel free to get in touch – I’m curious to hear about your project!
Services
Web App Development
Are you planning a web project that goes beyond a typical, static company website? Do you need tailored functionality? Maybe you’re even planning to build a SaaS application? Then you’re in the right place. I develop solid, custom web apps and REST APIs.
Read more
PHP Legacy Modernization
Do you have an existing PHP project that's a bit outdated? It lacks tests and a clear structure – maybe it was even built entirely without a framework? Are changes becoming increasingly risky and time-consuming? I'll help you modernize the code step by step – carefully and transparently.
Read more
Web Crawling & Scraping
Are you planning a project that involves continuously collecting data from across the web – automatically and at scale? That's my specialty! I'm the developer of the PHP library crwlr.software and the SaaS service crwl.io, which is built on top of it. I'm happy to support you in planning and implementing your solution.
Read more
What My Clients Say
We have experienced our collaboration with Christian as consistently positive. He took the time to thoroughly understand our requirements, presented different solution approaches, and helped us make the best decision. We especially value his reliability – everything we agreed on was implemented quickly and to a high standard. It is clear that he is highly skilled and enjoys developing tailored solutions. For specialized technical requirements, he has proven to be an absolutely competent and pleasant partner to work with.
Working with Otsch was extremely helpful for us: he not only configured our first crawlers in crwl.io, making it much easier for us to get started, but also developed custom extraction logic and supported us quickly and competently at all times. His experience in the field of web crawling was evident in every phase.
Blog
Have you ever deployed your website or web app, only to discover hours later that you’ve introduced bugs or broken links? Or do you clear the cache with every deploy, leaving the first users to experience slow performance? In this guide, you’ll learn how to use a crawler to automatically detect errors and warm the cache, ensuring your site runs smoothly after every deployment.
Read more
Version 1.8 of the crwlr/crawler package is out, introducing key new functions that will replace existing ones in v2.0. Addressing previous issues with composing crawling result data, this update provides a solution that enhances performance, minimizes memory usage further, and simplifies the process, making it more intuitive and easier to understand.
Read more
Since working with generators can be a bit tricky if you're new to them, this post offers an intro on how to use them and highlights common pitfalls to avoid.
Read more
Abstract classes cannot be instantiated directly, posing a challenge when testing functionality implemented within the abstract class itself. In this article, I will share my approach to addressing this issue.
Read more
This is the first article of our "Crwlr Recipes" series, providing a collection of thoroughly explained code examples for specific crawling and scraping use cases. This first article describes how you can crawl any website fully (all pages) and extract the data of schema.org structured data objects from all its pages, with just a few lines of code.
Read moreIs it decreasing and what to do about it?
My friend Florian Bauer recently posted an article saying that PHP needs a rebranding and that he would rename it to HypeScript. Here's my two cents on that subject.
Read more
I'm very proud to announce that version 1.0 of the crawler package is finally released. This article gives you an overview of why you should use this library for your web crawling and scraping jobs.
Read more
Version 0.6 is probably the biggest update so far, with a lot of new features ranging from crawling whole websites, through sitemaps, to extracting metadata and schema.org structured data from HTML. Here is an overview of all the new stuff.
Read more
We're already at v0.5 of the crawler package, and this version comes with a lot of new features and improvements. Here's a quick overview of what's new.
Read more
There is a new package in town called query-string. It lets you create, access, and manipulate query strings for HTTP requests in a very convenient way. Here's a quick overview of what you can do with it and how it can be used via the url package.
Read more