AI & Automation Engineer

AI and automation practice with the engineering work made explicit

I have not launched AI products for paying customers. What I can show is local model work, FastAPI experiments, prompt documentation, and small automation workflows with clear limits.
  • Convo-AI: FastAPI backend, local model orchestration, and environment setup work.
  • Automation: scripts and workflows that support personal projects and content pipelines.
  • Documentation: prompt notes, disclosure of AI-assisted code, and manual review steps.
Honesty upgrade

Clear scope, upfront

What I have

  • Local AI experiments that run on my personal development machine.
  • Documented prompts, edits, and limitations in READMEs.
  • Basic automation scripts tied to personal projects.

What I am still working toward

  • Production AI integrations or real-user telemetry pipelines.
  • Enterprise guardrails, audits, or policy enforcement.
  • Large-scale orchestration across teams and systems.

What I’m doing next

  • Better evaluation workflows for checking outputs and catching hallucinations.
  • Richer prompt orchestration with queues + storage.
  • Security and privacy reviews before public releases.
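The "queues + storage" item above could start as small as this toy sketch, which drains a FIFO of prompts through a model-call function and persists each result to SQLite. Everything here is illustrative; no such module exists in my repos yet, and `ask` stands in for whatever actually calls the local model.

```python
# Toy prompt-orchestration sketch: a FIFO of prompts plus SQLite storage.
# All names are hypothetical; this is a direction, not existing code.
import queue
import sqlite3


def run_queue(prompts, ask, db_path=":memory:"):
    """Drain a FIFO of prompts through `ask` and persist each result."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS runs (prompt TEXT, output TEXT)")
    q = queue.Queue()
    for p in prompts:
        q.put(p)
    results = []
    while not q.empty():
        prompt = q.get()
        output = ask(prompt)  # e.g. a call into the local model backend
        conn.execute("INSERT INTO runs VALUES (?, ?)", (prompt, output))
        results.append(output)
    conn.commit()
    return results
```

Keeping every run in a table is also what makes the evaluation and audit items above tractable later: there is a record to review.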
Reality snapshot

Current focus

Local-first experiments

The engineering part of this work lies in the backend setup: local model hosting, environment configuration, and getting the workflow to run reliably on one machine.

  • Convo-AI runs a FastAPI backend with Ollama locally for chat workflows.
  • No services are hosted for external users; all workloads run locally.
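A minimal sketch of what that local loop looks like, using only the standard library against Ollama's default local endpoint. The function names and default model are illustrative, not the actual Convo-AI code.

```python
# Minimal local chat sketch (hypothetical names; not the actual Convo-AI code).
# Assumes Ollama is serving on its default address, http://localhost:11434.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Shape of a non-streaming Ollama /api/generate request."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama server and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything targets localhost, nothing here serves external users; the FastAPI layer in Convo-AI wraps this kind of call behind its own endpoints.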

Documentation + prompts

I treat prompt work like another engineering artifact: useful only if it is documented, reproducible, and paired with manual review.

  • Prompt libraries and README logs documenting which content was AI-drafted and which content was manually edited.
  • Projects include TODOs for evaluation, safety, and more reliable workflow control.
Work samples

Proof on GitHub

Convo-AI

Engineering focus: FastAPI backend, local model workflow, environment variables, and end-to-end local setup.

  • FastAPI backend + simple UI for local chat flows.
  • Uses Ollama models and environment variables documented in the repo.
  • Disclosure: AI drafted the initial version of most endpoints; prompts and subsequent edits are documented in the README.
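The environment-variable setup mentioned above can be sketched like this. The variable names and defaults here are hypothetical; the real names are documented in the repo's README.

```python
# Hypothetical environment configuration for a local FastAPI + Ollama setup.
# Variable names and defaults are illustrative, not the actual Convo-AI config.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    ollama_host: str
    model_name: str


def load_settings(env=None) -> Settings:
    """Read configuration from the environment, with local-first defaults."""
    env = os.environ if env is None else env
    return Settings(
        ollama_host=env.get("OLLAMA_HOST", "http://localhost:11434"),
        model_name=env.get("MODEL_NAME", "llama3"),
    )
```

Defaulting to localhost keeps the project runnable end to end on one machine without any secrets or remote services.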

Proof links: Convo-AI

AI workflow notes

Engineering focus: how prompts, validation, disclosure, and manual review fit into the build process instead of replacing it.

  • Documented how I structure AI-assisted builds and where I still rely on manual checks.
  • Focus is on transparency regarding what AI drafted and what was rewritten manually.

Proof links: AI workflow post

Tools

What I’m experimenting with

  • Python + FastAPI — learning
  • Node.js / Express — prototypes
  • LangChain — exploring
  • Ollama + local LLMs — local only
  • OpenAI / Anthropic APIs — experiments
  • Supabase — exploring vector stores
  • GitHub Actions — small deploys

Each repository labels features as working, experimental, or aspirational, so the maturity level is explicit.

Help wanted

What I still need to learn

  • Responsible AI guardrails (policy checks, escalation paths) in production environments.
  • Measuring ROI beyond “this feels faster on my laptop.”
  • Scaling prompt orchestration with queues, storage, and audit requirements.
  • Security/privacy reviews for AI features before they reach real users.

If you mentor junior engineers on applied AI or automation, I am open to pairing sessions.