From Marketing to Data Engineering: How I Made the Switch
How one marketer followed the trail of tracking pixels into pipelines and built a career turning messy data into usable systems.
Fellow Data Tinkerers,
Lately, I’ve been thinking about starting a new series where people working in data share how they got here, what they’ve learned along the way and what their day-to-day looks like.
So, I’m kicking it off today with Alejandro, Senior Data Engineer at Workpath and writer of The Pipe and Line newsletter. We talked about how he went from marketing to data engineering, what his workflow looks like, why he was called an octopus and why he thinks “big data” is a fool’s errand for most teams.
So without further ado, let’s get into it!
Tell us about your role and company
I work as a senior data engineer at Workpath, a SaaS platform that helps enterprises execute strategy using OKR principles.
I’m the only data engineer here, which means I own the full data stack: dbt models, AWS Aurora RDS patterns, Tableau pipelines, Airflow orchestration and everything in between.
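For a flavour of what that orchestration looks like, here is a minimal sketch (not Workpath’s actual setup) of an Airflow DAG that runs and then tests a dbt project via the CLI; the DAG id, schedule and project path are placeholders:

```python
# Minimal sketch: Airflow orchestrating dbt via the CLI.
# The project path and schedule below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_run",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule` assumes Airflow 2.4+
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    # Only test the models after they have been built.
    dbt_run >> dbt_test
```

BashOperator plus the dbt CLI is the simplest pattern; richer dbt-Airflow integrations exist, but they aren’t needed to show the shape of it.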
Recently, I’ve also become the accidental ‘AI engineer’ since I’ve been experimenting heavily with AI agents and internal workflow automation.
My previous teammates have called me an octopus because I can gather requirements, model data and build dashboards if needed. I enjoy being that bridge between business needs and technical systems.
How did you break into data engineering?
I actually didn’t start in tech. My background is in advertising and marketing.
About ten years ago, I began in digital marketing and really got into web analytics. That curiosity led me to run my own agency for a couple of years, where I handled everything from Google/Meta Ads and no-code automations to outbound systems and website tracking. Running it taught me a lot.
Over time, I realised I was more drawn to the technical side of data: debugging scripts, analysing event-driven data, making sure tracking worked and translating between tech and business teams.
So, I followed that pull: from web analytics to data analytics then analytics engineering and now data engineering. Moving through those layers gave me the full picture of how data flows, from capture to insights.
Alejandro’s path
web analytics → data analytics → analytics engineering → data engineering
What does a typical week look like?
Typical week?! What’s that? :)
I am usually transforming business needs into automated processes, building out AI use cases (Agents, RAG, MCP, workflows), working on product analytics models for internal reporting or just mapping out ideas to get my head around the next most impactful thing I should be doing.
Some weeks are hands-on coding and others are pure problem-solving on Miro boards. I prioritise whatever creates the most impact that week.
All of the above is only possible if there are no broken pipelines. Now that never happens, right? 😅
What kind of data do you work with?
About 90% of our data is backend customer data in Aurora RDS; the rest is raw API extractions that need custom parsing.
With our AI projects, I’ve started dealing more with unstructured data like PDFs, presentations and even images, which adds a whole new layer of complexity.
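To illustrate the “custom parsing” side, here is a hedged sketch of flattening a nested raw API payload into warehouse-friendly rows; the payload shape and field names are made up for illustration, not any real Workpath API:

```python
# Hypothetical sketch: flatten one nested API response into tabular rows.
# The payload structure below is invented for illustration.
import json

raw = json.loads("""
{
  "account": {"id": "a-42"},
  "events": [
    {"type": "goal_updated", "ts": "2024-05-01T09:00:00Z", "meta": {"progress": 0.6}},
    {"type": "goal_created", "ts": "2024-05-01T08:00:00Z", "meta": {}}
  ]
}
""")

def flatten(payload: dict) -> list[dict]:
    """Turn one nested API response into flat, load-ready records."""
    account_id = payload["account"]["id"]
    return [
        {
            "account_id": account_id,
            "event_type": event["type"],
            "event_ts": event["ts"],
            # Missing keys are common in raw extractions; default explicitly.
            "progress": event.get("meta", {}).get("progress"),
        }
        for event in payload["events"]
    ]

print(flatten(raw))
```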
What business problem are you solving?
Most of the time I enable analysts, which means making sure they can provide customers with the right data to keep tracking their strategy progress and make the right decisions.
There’s no fixed KPI tied to my work but whether I’m building an AI agent, an Airflow DAG or a CI/CD pipeline, the principle is the same: enable people to use data effectively without friction.
What’s in your tech stack?
Languages: Python, SQL
Orchestration & Transformation: Airflow, dbt
Infrastructure: AWS (Aurora RDS, S3, ECR, Lambda)
Analytics: Tableau, Metabase
AI: Agno, pgvector, FastMCP (see the pgvector sketch after this list)
No-code tools: n8n. I was using it long before it was trendy; perfect for quick automations where Airflow would be overkill.
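Since pgvector came up above, here is a hedged sketch of the kind of similarity lookup it enables for RAG, using psycopg; the table, columns, connection string and embedding are all hypothetical:

```python
# Hypothetical sketch: a pgvector similarity search for a RAG retrieval step.
# Table, column and connection names are placeholders.
import psycopg

# Stand-in embedding; in practice this comes from an embedding model.
query_embedding = [0.1, 0.2, 0.3]
vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with psycopg.connect("postgresql://localhost/analytics") as conn:
    rows = conn.execute(
        """
        SELECT chunk_text
        FROM document_chunks
        ORDER BY embedding <=> %s::vector  -- pgvector's cosine-distance operator
        LIMIT 5
        """,
        (vector_literal,),
    ).fetchall()

for (chunk,) in rows:
    print(chunk)
```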
How do you handle competing stakeholder requests?
It’s a classic small-company problem: there is no product manager, so you don’t have someone protecting you from requests and filtering them for you, which means you have to do it yourself.
Over time I built a few filters:
Ask questions first. Most requests crumble under light questioning because they’re half-baked ideas with no clear scope.
Build a minimal version. Sometimes you just build a minimal solution and the requester disappears for weeks which buys you time.
Group prioritisation. If none of the above works and everyone’s lobbing requests your way, get them in a room and make them decide what’s most important.
Turns out, 99% of “urgent” requests are just nice-to-haves.
What most people don’t get about data engineering
When things work, no one notices. When something fails, everyone does.
Software engineers think we just manage databases; business users think we build dashboards. The truth is, “data engineer” means something different everywhere: some focus on ETL, others on DevOps and some on pure automation.
Personally, I see data engineering as the craft of making data usable. ETL is just 10% of it. Especially now, in the AI era, preparing data for intelligent systems is becoming just as critical as the pipelines themselves.
“I see data engineering as the craft of making data usable”
One thing you wish you knew earlier
How broad the skill set really is. It’s not just SQL and Python.
You need to understand containers, databases, orchestration, APIs and storage patterns, and that makes it a hard field to enter.
Most people who break in do so from analytics roles. You can’t ‘study’ real data engineering the same way you learn coding basics, so the best path is to find real problems at work and learn by solving them.
Any spicy takes?
Yes, the industry’s obsession with big data is misplaced.
Most companies don’t have “big” data; they have messy data. Yet you still see people talking about Spark clusters, Snowflake optimisations and billion-row queries when their real workload could fit comfortably in PostgreSQL.
I’ve seen companies spend thousands on Fivetran and Snowflake for workloads that a self-hosted Airbyte and Postgres instance could handle for a fraction of the cost.
This kind of hype confuses newcomers and fuels FOMO.
The truth? Most data engineering happens in small-scale, scrappy environments where you do a bit of everything and learn fast.
“Most companies don’t have ‘big’ data; they have messy data.”
If you enjoyed reading this, check out Alejandro’s newsletter, The Pipe and Line, where he shares practical insights on data engineering like this.
Was there a question you would have liked to ask?
Let me know your thoughts by replying to the email or leaving a comment below!
If you are already subscribed and enjoyed the article, please give it a like and/or share it with others. I really appreciate it 🙏