0️⃣3️⃣ March update

Since December, I've been deeply engaged in exploring the possibilities and challenges of using LLMs (rather than training them; after getting access to GPT-3 at Stitchfix, training and even fine-tuning felt less significant than learning how to use these tools). In December I left Stitchfix both to recharge from a hand injury and to explore other opportunities.

Here's a brief overview of my recent thoughts, projects, and plans:


  • Journaling app (jrnl.jxnl.co): Developing an agent with short-term and long-term memory that acts as a life coach for users, using summarization and embedding search. Along the way I'm exploring how privacy should work, how users want to interact with these agents, and how tool use can improve the user experience as well as how it can be abused or malfunction.
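To make the memory design concrete, here's a minimal sketch of the two-tier idea: short-term memory as a rolling summary, long-term memory as embedding search over past entries. All the names here are hypothetical illustrations, not the actual jrnl.jxnl.co code, and the toy bag-of-words "embedding" stands in for a real embedding model.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" so the sketch runs standalone;
    # a real system would call an embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class JournalMemory:
    """Two-tier memory: a rolling short-term summary plus
    embedding search over all past entries (long-term)."""

    def __init__(self):
        self.entries = []          # (text, embedding) pairs
        self.short_term_summary = ""

    def add_entry(self, text: str):
        self.entries.append((text, embed(text)))
        # In a real app an LLM would fold the new entry into the
        # running summary; the latest entry stands in for that here.
        self.short_term_summary = text

    def recall(self, query: str, k: int = 2):
        # Long-term memory: rank past entries by similarity to the query.
        ranked = sorted(self.entries,
                        key=lambda e: cosine(embed(query), e[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

memory = JournalMemory()
memory.add_entry("Went climbing today, hand felt much better")
memory.add_entry("Worked on the journaling app's privacy model")
print(memory.recall("how is my hand injury", k=1))  # finds the climbing entry
```

The appeal of this split is that the summary keeps recent context cheap to include in every prompt, while embedding search lets the agent pull up months-old entries only when they're relevant.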


  • Importance and limitations of human-in-the-loop systems: Recognizing the value of incorporating human feedback to enhance safety and reliability, particularly in sensitive applications like the journaling use case.
  • Data privacy and compliance: Acknowledging the limitations of third-party services like OpenAI and exploring additional measures to handle user data responsibly.
  • LLM ops tooling: Since every tool is new and few people have built complex apps yet, I hope my experience building tools can also give me insight into the kinds of tools other builders might want to use.
  • Safety considerations: Exploring various safety concerns, such as securing tools built on LLMs, prompt injection, and potential large-scale manipulation of training data.
  • Critique of LangChain: Reflecting on disagreements with the design of LangChain and considering alternative approaches.
  • Exploring a fun art project focused on humans interacting with AI, given current limits in AI memory and agency.


  • Experimenting with personalized personas, temporal awareness, and agency in the journaling chat app to improve users' perception of its agency and memory.
  • Working on more detailed citation schemes in LLMs to prevent hallucinations and improve reliability.
    • Improving embedding search via embedding fine-tuning (rather than in-context generation), using a triplet loss built from search results.
  • Continuing to explore new approaches and techniques for overcoming the challenges of working with LLMs, and keeping up with the literature.
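The triplet-loss idea above can be sketched in a few lines: from search logs, treat a result the user engaged with as the positive and an ignored result as the negative, then penalize the embedding whenever the anchor (query) sits closer to the negative than to the positive. This is a plain-Python sketch of the standard triplet loss, with made-up toy embeddings; a real fine-tuning run would use a framework like PyTorch and backpropagate through the embedding model.

```python
import math

def l2(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: zero when the anchor is already at least
    # `margin` closer to the positive than to the negative, positive
    # otherwise (signaling the embedding should be adjusted).
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

# Hypothetical embeddings mined from a search log.
query   = [0.9, 0.1]
clicked = [1.0, 0.0]   # positive: the result the user picked
ignored = [0.0, 1.0]   # negative: a result the user skipped

print(triplet_loss(query, clicked, ignored))  # satisfied triplet: 0.0
print(triplet_loss(query, ignored, clicked))  # violated triplet: positive loss
```

The nice property for this use case is that search logs generate triplets for free, so the embedding space gets pulled toward how users actually judge relevance in the journal.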