
AI & Automation Engineer

Practicing AI + automation with transparent limits

I have not launched AI copilots for paying customers. This page documents AI and automation prototypes I have built while learning with ChatGPT, Copilot, and local LLMs.
Honesty upgrade

Clear scope, upfront

What I have

  • Local AI experiments that run on my personal development machine.
  • Documented prompts, edits, and limitations in READMEs.
  • Basic automation scripts tied to personal projects.

What I don’t have yet

  • Production AI integrations or real-user telemetry pipelines.
  • Enterprise guardrails, audits, or policy enforcement.
  • Large-scale orchestration across teams and systems.

What I’m doing next

  • Better evaluation workflows for model outputs, including hallucination checks.
  • Richer prompt orchestration with queues + storage.
  • Security and privacy reviews before public releases.
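As a sketch of the kind of evaluation I mean, here is a naive lexical-overlap check that flags output sentences whose vocabulary is mostly absent from the source text. The heuristic, threshold, and example strings are illustrative only; this is not code from any of my repos, and real hallucination detection needs much more than keyword overlap:

```python
import re

def flag_unsupported_sentences(source: str, output: str, threshold: float = 0.5) -> list[str]:
    """Flag output sentences whose content words mostly don't appear in the source.

    A naive lexical-overlap heuristic, not a real hallucination detector:
    it only catches claims that use vocabulary absent from the source text.
    """
    source_words = set(re.findall(r"[a-z0-9]+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = re.findall(r"[a-z0-9]+", sentence.lower())
        if not words:
            continue
        # Fraction of this sentence's words that also appear in the source.
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

source = "Convo-AI is a FastAPI backend that talks to Ollama models running locally."
output = "Convo-AI is a FastAPI backend. It is deployed to thousands of enterprise customers."
print(flag_unsupported_sentences(source, output))
```

Even a toy check like this makes regressions visible: the second sentence above gets flagged because almost none of its words occur in the source.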
Reality snapshot

Current focus

Local-first experiments

  • Convo-AI runs a FastAPI backend with Ollama locally for chat workflows.
  • No services are hosted for external users; all workloads run locally.
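Convo-AI's actual code lives in the repo; as an illustration of the shape of a local Ollama chat call behind a backend like this, here is a stdlib-only sketch. The model name `llama3` and the default `localhost:11434` endpoint are assumptions for the example, not the repo's configuration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(model: str, messages: list[dict]) -> dict:
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {"model": model, "messages": messages, "stream": False}

def chat(model: str, messages: list[dict]) -> str:
    """Send one chat turn to a locally running Ollama server and return the reply."""
    body = json.dumps(build_chat_payload(model, messages)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Payload shape only; calling chat() requires `ollama serve` with the model pulled.
print(build_chat_payload("llama3", [{"role": "user", "content": "Say hello."}]))
```

In the real project the same call sits behind a FastAPI route, which is what keeps everything local: the backend and the model server both run on my machine.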

Documentation + prompts

  • Prompt libraries and README logs documenting which content was AI-drafted and which was manually edited.
  • No production integrations or enterprise telemetry; these are learning projects with documented TODOs.
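A minimal sketch of the kind of provenance log these READMEs keep. The table format, section names, and origin labels here are invented for illustration, not lifted from any repo:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceLog:
    """Track which sections of a project were AI-drafted vs. manually edited."""
    entries: list[tuple[str, str, str]] = field(default_factory=list)

    def record(self, section: str, origin: str, note: str = "") -> None:
        # Restrict origin to the two labels used throughout the READMEs.
        assert origin in {"ai-drafted", "manually-edited"}
        self.entries.append((section, origin, note))

    def to_markdown(self) -> str:
        """Render the log as a markdown table suitable for pasting into a README."""
        lines = ["| Section | Origin | Note |", "| --- | --- | --- |"]
        lines += [f"| {s} | {o} | {n} |" for s, o, n in self.entries]
        return "\n".join(lines)

log = ProvenanceLog()
log.record("api/chat endpoint", "ai-drafted", "initial version from ChatGPT")
log.record("auth middleware", "manually-edited", "rewritten after review")
print(log.to_markdown())
```

Keeping the log structured rather than freeform makes it easy to regenerate the README table whenever a section's origin changes.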
Work samples

Proof on GitHub

Convo-AI

  • FastAPI backend + simple UI for local chat flows.
  • Uses Ollama models and environment variables documented in the repo.
  • Disclosure: AI drafted the initial version of most endpoints; prompts and subsequent edits are documented in the README.

Proof links: Convo-AI

AI workflow notes

  • Documented how I structure AI-assisted builds and where I still rely on manual checks.
  • Focus is on transparency regarding what AI drafted and what was rewritten manually.
  • Used to keep expectations realistic for collaborators.

Proof links: AI workflow post

Tools

What I’m experimenting with

  • Python + FastAPI — learning
  • Node.js / Express — prototypes
  • LangChain — exploring
  • Ollama + local LLMs — local only
  • OpenAI / Anthropic APIs — experiments
  • Supabase — exploring vector stores
  • GitHub Actions — small deploys

Each repository labels its features as working, experimental, or aspirational so the maturity level is clear.

Help wanted

What I still need to learn

  • Responsible AI guardrails (policy checks, escalation paths) in production environments.
  • Measuring ROI beyond “this feels faster on my laptop.”
  • Scaling prompt orchestration with queues, storage, and audit requirements.
  • Security/privacy reviews for AI features before they reach real users.

If you mentor junior engineers on applied AI or automation, I am open to pairing sessions.