I'm Ismail

FULL-STACK ENGINEER
Solutions expert specializing in MVP, SaaS, and AI
Profile Photo

I learn fast. Every large project brings new technologies to test and implement, and I have a proven record of making the most of the latest tools. 🚀

I love optimizing. Whether it's an SQL query, a React component render, a dev workflow enhanced by type safety, or getting the most out of Sentry, I find the best way forward. 🧠

I actually enjoy debugging. A memory leak in React? A bug in a FastAPI backend's exponential-backoff retry logic? A missing dependency? I trace each one to its root cause. 🔍

Clients: 2buy2, Alex, Event Ignite, Slingshot, Horizon's End, Lean Focus, NES Health, Numberfit, Thea
Selected Work

My Projects

I played a leading role on these projects, working directly with founders and entrepreneurs, designing solutions and writing clean code.

  • AshBuilder

    THE STORY BEHIND THE CODE: A DEEP DIVE

    Summary with focus on technical aspects

    Six months of high-paced startup experience supporting a production ML-ops application built with Next.js, tRPC, and TypeScript. Contributed to new feature development, wrote end-to-end tests using Cypress, and led major frontend and database performance optimizations, including refactoring efforts.

    App description

    Slingshot is a well-funded startup tackling the mental health crisis, based on the insight that "there just aren't enough people or resources to help everyone in need." In January 2024, they began building a foundational AI model for psychology, resulting in Ash—an AI therapist mobile app.

    To support this, the team developed AshBuilder, a web app designed to centralize the creation and annotation of therapeutic dialogues by writers—either psychology experts or experienced writers studying psychological literature. These dialogues, along with annotations on model responses, were used to fine-tune internal AI models developed by the ML team.

    Writers could create conversations by typing, roleplaying (e.g., one as therapist, one as patient), or using speech-to-text features while recording themselves. They could also generate responses from a variety of state-of-the-art LLMs (e.g., GPT-4o by OpenAI, LLaMA or Mistral hosted on Groq, or Gemini by Google DeepMind), as well as internal models, to identify failure cases and contextual weaknesses. The system allowed dynamic integration of new LLMs and removal of unused ones.

    Another key feature was the ability to preview and annotate anonymized production conversations from the Ash chatbot, without storing this content in AshBuilder's database—respecting data separation and privacy policies.

    The app included robust role-based user management. For example, certain writers could be granted access to create therapeutic dialogues end-to-end while being restricted from viewing sensitive AI-chat production conversations.

    Additional LLM-assisted tooling was introduced, such as:

    LLM-based prompt generation, helping writers test various prompts for generating better responses from the internal model.

    LLM-based Context generation, allowing users to quickly understand the full context behind a problematic AI response—potentially hundreds of messages into a conversation—without reading the entire thread.

    Technical details

    Before I joined, the app had been built with Next.js, React Query, Supabase, and TypeScript. It did not fully adhere to clean-code principles, as the initial goal was to get writers working as soon as possible.

    The first challenge I took on was a full refactoring: introducing Zod and tRPC to maximize type safety, which significantly reduced bugs when implementing new features or making changes.

    I also redesigned parts of the PostgreSQL (Supabase) schema to improve performance. For example, I changed the message storage strategy from one row per message to a single JSONL column. Retrieving a full conversation had required JOIN operations over hundreds of rows per session, which introduced latency and complexity, and the common usage pattern was always to read entire conversations rather than individual messages. Individual messages still needed occasional updates, but early experimentation showed this wasn't a bottleneck in practice (rewriting a relatively large JSONL document was fast), so the read-performance gains of loading a conversation as a single document far outweighed the trade-offs.

    To better understand the writers' workflows while contributing to the codebase, I wrote end-to-end tests using Cypress and integrated them into the existing CI workflow with GitHub Actions. The tests ran against Vercel preview deployments, ensuring reliable functionality in every preview before merging.
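    The JSONL storage strategy can be sketched in a few lines. This is an illustrative Python sketch (the production code was TypeScript, and the function names here are hypothetical): the whole conversation lives in one column, and even a single-message update rewrites the blob, which was fast enough in practice.

```python
import json

def conversation_to_jsonl(messages):
    """Serialize a conversation (list of message dicts) into one JSONL string,
    suitable for storing in a single column instead of one row per message."""
    return "\n".join(json.dumps(m, separators=(",", ":")) for m in messages)

def jsonl_to_conversation(blob):
    """Deserialize the JSONL column back into a list of message dicts."""
    return [json.loads(line) for line in blob.splitlines() if line]

def update_message(blob, index, new_content):
    """Update a single message by rewriting the whole JSONL blob.
    Rewriting conversation-sized payloads proved fast enough in practice."""
    messages = jsonl_to_conversation(blob)
    messages[index]["content"] = new_content
    return conversation_to_jsonl(messages)
```

    Reading a full conversation becomes a single-row fetch plus a linear parse, with no JOINs.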

    Continuous sharpening of debugging skills, effective communication, and effective use of development tools were essential in this role: any downtime or bug could block writers during their most creative and cognitively demanding work, something we worked hard to avoid. As part of improving my debugging skills, I learned to set up and efficiently use the VS Code debugger for full-stack Next.js debugging.

    A key consideration was ensuring that writers never lost work in progress. To address this, we relied heavily on local storage, allowing seamless recovery of data after unexpected interruptions. One of the reasons we chose Jotai for global state management was its built-in support for persistent state, making it an ideal fit for this requirement.

    LLM-based context generation refers to the process of using a large language model (LLM) like GPT-4o to provide context around a specific AI response in a conversation. This is particularly useful when dealing with problematic responses that might be hundreds of messages into a conversation. Instead of manually reading through the entire thread, the system dynamically generates a concise context, helping users quickly understand the relevant background information leading up to the issue.

    From a technical standpoint, we implemented this by using a hardcoded prompt that would extract the beginning portion of the conversation—up to the token limit of GPT-4o—ensuring that enough context was captured to inform the analysis. To handle the high-frequency demand for this context generation, we utilized Upstash-hosted Redis for caching. We generated unique hashed keys based on the content of the conversation messages, which allowed us to cache the context analysis and retrieve it efficiently. This caching ensured that as long as the message content didn't change, all writers would receive the same cached context, reducing redundant processing. To adhere to privacy policies, we encrypted the message content before storing it in Redis, ensuring that the data was only accessible through the internal service, maintaining confidentiality.
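    The content-addressed caching idea can be sketched as follows. This is a simplified Python stand-in (the production service was TypeScript with Upstash-hosted Redis; the dict cache and names here are illustrative, and encryption is omitted): the key is a hash of the message content, so identical conversations hit the same cache entry.

```python
import hashlib
import json

def context_cache_key(messages):
    """Derive a deterministic cache key from conversation content.
    Identical content -> identical key, so all writers share one cached analysis."""
    canonical = json.dumps(messages, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return f"context:{digest}"

_cache = {}  # stands in for Redis

def get_or_generate_context(messages, generate):
    """Return the cached context analysis, generating it only on a cache miss."""
    key = context_cache_key(messages)
    if key not in _cache:
        _cache[key] = generate(messages)
    return _cache[key]
```

    As long as the message content doesn't change, the expensive LLM call runs once and every subsequent writer gets the cached result.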

    Given the importance of quick debugging, I gained extensive experience setting up and using Sentry to capture detailed insights during troubleshooting. For example, when writers experienced noticeable latencies while using LLMs, I implemented time-to-first-token tracking. This allowed us to pinpoint whether the latency was caused by the LLM providers, our system, or the writers' internet connection, enabling us to optimize the process more effectively.
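    Time-to-first-token tracking amounts to timestamping the first chunk of a streamed response. A minimal Python sketch (illustrative names; in production the metric was reported to Sentry from TypeScript code):

```python
import time

def stream_with_ttft(token_stream):
    """Consume a token stream, recording time-to-first-token (TTFT).
    A high TTFT points at the provider or network rather than our rendering."""
    start = time.monotonic()
    ttft = None
    tokens = []
    for token in token_stream:
        if ttft is None:
            ttft = time.monotonic() - start  # latency until the very first token
        tokens.append(token)
    return "".join(tokens), ttft
```

    Comparing TTFT against total response time separates provider-side latency from everything that happens afterwards.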

    As part of improving backend performance, particularly in anticipating bottlenecks as the volume of annotations and conversations grew, I gained hands-on experience with SQL query optimization and utilized query analysis tools to identify inefficiencies. This allowed us to fine-tune queries and ensure the system could scale effectively as data increased.

    Due to the fast-paced nature of the work, I learned the importance of avoiding premature optimizations and focusing on performance improvements only when necessary. For instance, when writers began experiencing lags that hindered their productivity, I implemented lazy loading and virtualization to address the issue, ensuring a smoother and more efficient environment for them to deliver business value.

    Technology and services used

    • TypeScript
    • NextJS
    • React-Query
    • tRPC
    • Supabase
    • OpenAI
    • Anthropic SDK
    • Vercel AI SDK
    • Redis
    • Linear
    • Vercel
    • Sentry
    • TailwindCSS
    • Lodash
  • Idea Elaboration App

    THE STORY BEHIND THE CODE: A DEEP DIVE

    Summary with focus on technical aspects

    Developed an Idea Elaboration App focused on promoting Nuclear Energy through the generation of 130 actionable items. Leveraging information from 300 diverse sources, including books, blog posts, and interview transcriptions, the AI app ensured a comprehensive set of outcomes in alignment with project goals. The technical implementation involved using Jupyter Notebook for its flexibility and real-time feedback capabilities. Langchain's abstraction layer interfaced with the GPT-3.5-Turbo model, handling source loading, text conversion, and token splitting. OpenAI's ada embedding model and FAISS Vector Store facilitated the creation and storage of embeddings, while Langchain's Prompt Template functionality orchestrated the iterative prompt-response cycles. Technologies used include Langchain, FAISS Vector Store, and Python with Jupyter Notebook.

    App description

    The primary objective of this project was to generate a comprehensive set of actionable items for promoting the use of nuclear energy, based on diverse sources such as scientific articles and books. In parallel, a human expert researcher extracted actionable items from the same material. The actionable items produced by the AI app and those derived by the researcher were judged comparable, confirming that both approaches yielded outcomes aligned with the project's overarching goals.

    The AI app generated 130 actionable items by leveraging information from approximately 300 knowledge sources. Among these sources, four were books, while the remainder consisted of blog posts and interview transcriptions.

    Technical details

    In alignment with the app's goal of creating a document of detailed actionable items, we opted for Jupyter Notebook as our coding environment, owing to its flexibility and real-time feedback. Jupyter Notebook also retains the results of executed cells, which speeds up iteration when testing how prompt tweaks affect responses. To interface with the GPT-3.5-Turbo model, we leveraged Langchain's abstraction layer over the OpenAI Python SDK.

    The workflow began with loading sources (PDF, .txt, .docx) and converting them into raw text using Langchain's Unstructured document loaders. Then, to respect the 4,096-token context limit GPT-3.5-Turbo had at the time, we employed Langchain's Token Text Splitter to break the sources into manageable text chunks.
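    The splitting step works roughly like this. A simplified Python sketch using whitespace tokens as a stand-in for real model tokenization (Langchain's Token Text Splitter counts tokens with the model's tokenizer; the parameters here are illustrative):

```python
def split_into_chunks(text, max_tokens=1000, overlap=50):
    """Split text into chunks of at most max_tokens tokens, with a small
    overlap between consecutive chunks so context isn't cut mid-thought."""
    tokens = text.split()  # naive whitespace "tokenizer" for illustration
    chunks = []
    step = max_tokens - overlap
    for start in range(0, len(tokens), step):
        chunk = tokens[start:start + max_tokens]
        if chunk:
            chunks.append(" ".join(chunk))
        if start + max_tokens >= len(tokens):
            break
    return chunks
```

    The overlap keeps a sentence that straddles a boundary visible in both neighboring chunks.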

    The third step was to create embeddings from those chunks and store them in a vector store. We used OpenAI's ada embedding model to create the embeddings and FAISS (Facebook AI Similarity Search) to store them locally. FAISS provides several similarity-search methods spanning a wide spectrum of usage trade-offs.
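    At its core, the retrieval step is a similarity search over embeddings. An exact cosine-similarity sketch in plain Python (FAISS does the same job at scale with optimized, often approximate, index structures; the names here are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_chunks(query_embedding, index, k=3):
    """index: list of (chunk_text, embedding) pairs.
    Returns the k chunks most similar to the query embedding."""
    scored = sorted(index,
                    key=lambda item: cosine_similarity(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in scored[:k]]
```

    FAISS replaces this linear scan with index structures whose accuracy/speed trade-offs can be tuned per use case.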

    The subsequent step involved feeding every chunk of text along with the instructions to the GPT-3.5-Turbo model to produce actionable items based on those chunks of text. After this initial prompt-response cycle, we obtained 130 actionable items from the GPT model, documented in a text file. Finally, in a subsequent prompt-response cycle, each generated actionable item from the first cycle, along with relevant chunks of text, was fed back to the GPT model. The model was prompted to elaborate on each actionable item based on the associated text chunks. This iterative process resulted in the production of a comprehensive document containing 130 elaborated actionable items.
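    The two prompt-response cycles can be sketched as below. This is an illustrative Python outline (the real pipeline used Langchain's Prompt Templates and GPT-3.5-Turbo; `llm` here is a hypothetical callable and the prompt wording is invented):

```python
EXTRACT_TEMPLATE = (
    "Based on the following source text, list actionable items for promoting "
    "nuclear energy, one per line.\n\nSource:\n{chunk}"
)
ELABORATE_TEMPLATE = (
    "Elaborate on this actionable item using only the provided source text.\n\n"
    "Item: {item}\n\nSource:\n{chunk}"
)

def elaborate_actionable_items(chunks, llm):
    """Cycle 1: extract actionable items from each chunk.
    Cycle 2: elaborate each item against the chunk it came from."""
    items = []
    for chunk in chunks:
        raw = llm(EXTRACT_TEMPLATE.format(chunk=chunk))
        items.extend((line.strip(), chunk) for line in raw.splitlines() if line.strip())
    return [llm(ELABORATE_TEMPLATE.format(item=i, chunk=c)) for i, c in items]
```

    The instructions stay fixed between calls; only the chunk (and, in the second cycle, the item) varies, which is exactly what prompt templating is for.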

    Diagrams


    Technology and services used

    • Langchain
    • FAISS Vector Store
    • Python with Jupyter Notebook
  • TheaAI

    Health & wellness AI-powered iOS mobile app with engaging avatar-led chats

    OpenAI, Swift, SwiftUI, SwiftData, HealthKit, WidgetKit

    TheaAI

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    Developed TheaAI, a personalized health & wellness iOS app that uses HealthKit and EventKit for tailored experiences, with OpenAI LLM models powering the avatar's insights and conversations. Employed SwiftUI and SwiftData for the App Store marketing advantages that come with adopting Apple's latest technologies. Overcame user-experience challenges common to health apps by introducing WidgetKit to surface health insights. Focused on architectural design to manage code complexity, and leveraged Swift's versatility to achieve high extensibility in the data-processing modules and standardized communication among code modules.

    App description

    TheaAI is a fun and personalized health & wellness iOS mobile app with engaging avatar-led chats and journeys for a tailored wellness experience. It uses HealthKit to access the user's health data and personalize the app experience, and EventKit to personalize the scheduling of actionable recommendations while respecting the user's calendar.

    The essence of the app is the chatbot, with different avatar coaching styles (fierce, cheerleading, or educational) chosen based on the user's preference. Additionally, the app offers a widget that presents, at a glance, health insights generated from data collected via HealthKit and EventKit.

    The iOS app is live here.

    Technical details

    The decision to use the latest Apple technologies, SwiftUI and SwiftData, was driven by the marketing-related benefits associated with this choice. Specifically, the App Store tends to favor apps that leverage the latest technologies over those developed using older ones.

    Interesting challenges arose from the realization that notifications are not an effective way to remind users in health-domain apps. To address this, Apple's WidgetKit was used to create an iOS widget. During the widget's development, numerous optimization challenges emerged around the use of OpenAI models within the hardware resource limits of both iPhone and iPad.

    The architectural design of the system played a crucial role in the app's development, with code complexity kept under constant watch. This was particularly important because the entirety of the app's code is essentially frontend mobile-app code.

    Swift's versatile capabilities enable developers to write code in both functional and object-oriented paradigms. In the implementation of data processing-related software modules, a functional paradigm was employed, while the object-oriented paradigm was used in other parts of the system for data, interface, and communication standardization, aiming for high extensibility. The necessity for high extensibility in TheaAI system stemmed from the constant need for prompt tweaks and tests, relying on data retrieved from HealthKit and EventKit.

    Various cost optimization strategies for OpenAI were brainstormed, including the use of more economical models where the decrease in response quality is minimal. Additionally, considerations were made for storing certain Large Language Model (LLM) responses, particularly when their reuse wouldn't cause significant unwanted determinism in the app. For instance, health knowledge facts, answerable deterministically, could be retrieved from the database using semantic search methods.

    Diagrams


    Technology and services used

    • Swift
    • SwiftUI
    • SwiftData
    • HealthKit
    • EventKit
    • WidgetKit
    • OpenAI and OpenAI SDK
    • Sentry
    • Figma
    • Trello
  • NES Health API

    THE STORY BEHIND THE CODE: A DEEP DIVE

    Summary with focus on technical aspects

    Developed the NES Health API, a Retrieval-Augmented Generation (RAG) API tailored to the bioenergetics health and wellness industry. Used the FastAPI Python framework for rapid API development with built-in Swagger support. The Langchain framework enabled seamless integration with the Pinecone vector database and a time-efficient implementation of streaming responses. FastAPI's REST design, coupled with Swagger support, ensured smooth collaboration with the client's development team. The API loads files into Pinecone by extracting raw text, segmenting it into chunks, and generating embeddings stored in the Pinecone vector database, enabling users not only to filter and retrieve relevant information but also to get answers to their questions derived from the knowledge base. Implemented semantic-similarity retrieval and dynamic prompts for precise answers, and deployed the API on AWS EC2.

    App description

    This project entails the development of a Retrieval Augmented Generation (RAG) API featuring diverse endpoints, each tailored to specific functionalities. It is designed for use by a prominent leader in the bioenergetics health and wellness industry. The primary goal is to provide clients with the ability to extract precise answers to specific questions from a comprehensive collection of texts, including various formats such as PDFs, DOCX, and TXT files.

    Diagrams:
    https://miro.com/app/board/uXjVME3pm0w=/?share_link_id=651464295180
    https://miro.com/app/board/uXjVM_PWE3c=/?share_link_id=870386612405

    Technical details

    The FastAPI Python framework was chosen for its rapid API development and built-in Swagger support. Following REST design best practices, combined with the built-in Swagger documentation, meant the client's development team integrated the API into their system without friction or the need to contact our team. Leveraging the Langchain framework, specifically its Python SDK, alongside the Pinecone vector database (and its Python SDK), proved instrumental in achieving the project's objectives.

    One endpoint handles loading files into Pinecone. This involves extracting raw text from files using various Langchain document loaders, segmenting the text into manageable chunks (guided by the 4,096-token limit OpenAI models had at the time), and generating embeddings with OpenAI's ada embedding model. These embeddings, together with the text they are derived from and user-defined metadata, are stored in the Pinecone vector database. Pinecone was chosen for its ability to filter embeddings by user-defined metadata, which the project specification required.

    After files are uploaded, API users can filter text chunks by metadata or retrieve relevant chunks via semantic similarity. We also implemented options for editing and adding metadata on text chunks. Semantic similarity involves creating embeddings from the user's questions and comparing them against the existing embeddings of text chunks in Pinecone. Users obtain answers by feeding queries, together with the most relevant chunks retrieved from Pinecone, into the GPT model. To ensure the model derives answers exclusively from these chunks, dynamic prompts are employed: instructions in the prompt direct the model to base its response only on the specified sources. For this purpose, Langchain's Prompt Template feature was integrated into the solution; the instructions remain constant for each question, while the question itself and the corresponding chunks vary.
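    Combining metadata filtering, similarity retrieval, and the dynamic prompt looks roughly like this. A simplified in-memory Python sketch (Pinecone applies the metadata filter server-side before the vector search; the store layout, prompt wording, and names here are illustrative):

```python
def retrieve_chunks(query_embedding, store, metadata_filter=None, k=3):
    """store: list of {"text", "embedding", "metadata"} dicts, a stand-in
    for Pinecone. Filter by metadata first, then rank by similarity."""
    candidates = [
        rec for rec in store
        if not metadata_filter
        or all(rec["metadata"].get(key) == value
               for key, value in metadata_filter.items())
    ]
    # dot product as a simple similarity score for the sketch
    candidates.sort(
        key=lambda rec: sum(q * e for q, e in zip(query_embedding, rec["embedding"])),
        reverse=True)
    return [rec["text"] for rec in candidates[:k]]

ANSWER_TEMPLATE = (
    "Answer the question using only the sources below. If the sources do not "
    "contain the answer, say so.\n\nQuestion: {question}\n\nSources:\n{sources}"
)
```

    The template's instruction to use only the supplied sources is what keeps the model's answer grounded in the knowledge base.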

    For testing purposes, we deployed the API to an AWS EC2 instance.

    Diagrams


    Technology and services used

    • Langchain
    • Pinecone Vector Database
    • Python with FastAPI
    • AWS
  • AI Chatbot Arena - Ellie AI

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    App description

    Currently in development, this project aims to empower supporters of AI while also addressing the concerns of anti-AI advocates. The overarching objective is to present compelling (counter)arguments highlighting the advantages of AI usage. After discussing possible formats, we decided on a chatbot arena, where the user asks a question and receives two distinct answers, one from each perspective on the risks of using AI. After the initial phase of the project, in which we constructed a comprehensive knowledge graph of arguments both for and against the assertion that "AI is safe to use", we started building the UI for the chatbot arena.

    Technical details

    The objectives we have achieved so far are:

    • Produced around 300 arguments that either support or oppose the main assertion. This was done by loading multiple text sources dealing with the risks and benefits of using AI (books, articles, and podcast transcripts of leading philosophers and thinkers in this area, such as David Deutsch and Yann LeCun), using the document loaders available in the Langchain framework. These sources were then split into manageable chunks with Langchain's Character Text Splitter. We then prompted the GPT-4 model to extract relevant arguments from those chunks, again using Langchain, which provides an abstraction layer over OpenAI's SDK and simplifies the creation of dynamic prompts with its Prompt Templating functionality.
    • Created embeddings of the arguments and stored them in Supabase, along with the text form of each argument and other data used for filtering (e.g., whether an argument is optimistic or pessimistic about AI use). We used OpenAI's ada embedding model to create the embeddings and the PGVector extension for PostgreSQL (Supabase is built on top of PostgreSQL) to store them. We chose PostgreSQL (Supabase) as the vector store because it is stable and provides both semantic-similarity and SQL functionality.
    • Since several arguments were very similar or identical, we ran an algorithm to identify the most similar arguments and merge them, using a threshold of a 94% semantic-similarity score.
    • Matched the best opposing argument to every pessimistic argument. We did this by prompting the GPT-4 model to create the best opposing argument for each pessimistic one, then embedding the GPT-4-produced argument and comparing that embedding against the embeddings of optimistic arguments already stored in Supabase.
    • Set up the project infrastructure for the chatbot arena using the Next.js framework, with two distinct API routes. Each route generates an answer to the user's question from one perspective (optimistic or pessimistic about the use of AI). To achieve this, we implemented dynamic prompts: each time a user asks a question, we create embeddings from it and run a similarity search over the argument embeddings stored in Supabase, detecting the arguments most relevant to the question from both perspectives. The relevant arguments are then inserted into two prompts (one for the optimistic chatbot, one for the pessimistic), along with distinct instructions for each chatbot. The dynamic prompts are sent to OpenAI's GPT-4-Turbo model, and the generated answers are streamed to the frontend using Vercel's AI package.
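    The near-duplicate merging step above can be sketched in Python. This is a simplified in-memory version (the real pipeline read argument embeddings from Supabase; names are illustrative), keeping the first member of each group whose pairwise similarity exceeds the 94% threshold:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def merge_near_duplicates(arguments, threshold=0.94):
    """arguments: list of (text, embedding). Drop any argument whose
    similarity to an already-kept argument exceeds the threshold."""
    kept = []
    for text, emb in arguments:
        if all(cosine(emb, kept_emb) < threshold for _, kept_emb in kept):
            kept.append((text, emb))
    return [text for text, _ in kept]
```

    A greedy pass like this is quadratic in the number of arguments, which is perfectly fine at the scale of ~300 items.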

    Diagrams


    Technology and services used

    • Langchain
    • Supabase (Python SDK)
    • PG Vector PostgreSQL Extension
    • Next.js 14
    • Vercel AI
  • AI IQ Solver

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    AI IQ Solver is an application that assesses the problem-solving capabilities of artificial intelligence models in the context of IQ puzzles. Developed using the Next.js framework, the application integrates two software development kits (SDKs), OpenAI and Replicate, and is hosted on the Railway platform.

    App description

    AI IQ Solver is an application designed to assess the problem-solving capabilities of artificial intelligence models in the context of IQ puzzles.

    Link to the app: https://ai-iq-solver-production.up.railway.app/
    YouTube video featuring the app: https://youtu.be/QrSCwxrLrRc?si=2YsbFE2-a5sBDYkB
    Wireframe/Diagram: https://miro.com/app/board/uXjVMwx38UA=

    Technical details

    Developed using the Next.js framework, the application incorporates an API that integrates two software development kits (SDKs): OpenAI and Replicate. OpenAI's Node SDK establishes communication with the GPT-4 model, while Replicate's Node SDK interfaces with the Mini-GPT-4 multimodal model. Two models were needed because, at the time, OpenAI's models could not process multimodal inputs. Users submit a collection of .txt and image files (.jpeg, .png) containing IQ puzzles. The application categorizes the puzzles into textual and visual formats, directing textual puzzles to GPT-4 and visual puzzles (comprising both text and image components) to Mini-GPT-4. The models' responses are then presented in the application interface. The user interface is built with the Material UI library, and the app is hosted on the Railway platform for efficient, scalable access.
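    The routing step is simple file-type dispatch. An illustrative Python sketch (the app itself is Node/Next.js; extension sets and route names here are assumptions):

```python
TEXT_EXTENSIONS = {".txt"}
IMAGE_EXTENSIONS = {".jpeg", ".jpg", ".png"}

def route_puzzles(filenames):
    """Split submitted puzzle files between the text-only model (GPT-4)
    and the multimodal model (Mini-GPT-4), based on file extension."""
    routes = {"gpt-4": [], "mini-gpt-4": []}
    for name in filenames:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext in TEXT_EXTENSIONS:
            routes["gpt-4"].append(name)
        elif ext in IMAGE_EXTENSIONS:
            routes["mini-gpt-4"].append(name)
    return routes
```

    Each bucket is then sent to its model through the corresponding SDK, and the responses are rendered together in the UI.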

    Diagrams


    Technology and services used

    • Next.js
    • OpenAI Node SDK
    • Replicate Node SDK
    • GPT-4
    • Mini-GPT-4
    • Railway
  • AlexAI

    Instant answers to complex energy and climate questions with AlexAI

    OpenAI, Langchain, Python, AsyncIO, Supabase, Swift, Railway, and others

    AlexAI

    THE STORY BEHIND THE CODE: A DEEP DIVE
    SHOWCASING THE APP

    Summary with focus on technical aspects

    Developed AlexAI, an AI-powered chatbot that provides instant answers to complex energy and climate questions. The backend was built with FastAPI and deployed on the Railway cloud for scalability, while the frontend followed custom designs. Supabase was chosen to manage the database and streamline authentication and cloud storage. The Langchain framework simplified LLM integration for text summarization, removing the need for some low-level custom implementations. An iOS app was also developed, initially leveraging WebView to reach the market early, with a later decision to transition to a fully fledged natively coded mobile app. Technologies included OpenAI, Langchain, Python, AsyncIO, Supabase, Swift, Railway, and others.

    App description

    Instant answers to your questions on energy, environmental, and climate issues, based on the mind and work of Alex Epstein, pro-human philosopher and energy expert. AlexAI uses a custom GPT model to handle even the most complex energy and climate questions, drawing on more than 150 processed sources including Alex's blog posts, interviews, podcasts, and books.

    AlexAI critically analyzes every user message and can identify and reframe questions with flawed underlying assumptions.

    Web App is live on: https://alexgpt.ai/
    iPhone and iPad app is live on: https://apps.apple.com/gb/app/alexai/id6448963081
    Blog post about AlexAI: https://alexepstein.substack.com/p/your-exclusive-early-access-to-alexai?utm_campaign=email-post&r=7b6oh&utm_source=substack&utm_medium=email

    Technical details

    The app is written as a FastAPI backend that incorporates server-side rendering. It is deployed on the Railway cloud service, which offers effortless out-of-the-box scalability and fast deployment iterations. AlexAI uses Supabase as a managed service that hosts the database and exposes it through a REST API; Supabase also speeds up development by offering ready-made solutions for authentication, authorization, and cloud file storage.

    The frontend is written in plain HTML, CSS, and JavaScript, based on custom designs provided by a UI designer. There was no need for a fully fledged frontend or CSS framework because the web app required no complex user interactivity.

    The design requirements underwent several adjustments during development, and new features were constantly being brainstormed. Managing these changes without notable disruption or code rewrites was a continuous attestation of the team's commitment to clean coding practices. This approach enabled quick iterations, which were crucial given the fast pace of state-of-the-art LLM (Large Language Model) development and related technologies.

    One notable instance of this progress affecting the development process is the evolution of the Langchain framework, which acts as a sophisticated abstraction layer streamlining the use of LLM technologies. New Langchain features were always on the team's radar, because abstracting away lower-level data-processing implementations is more time-effective than writing custom code and also reduces code complexity. For example, when experimenting with different text summarization methods, the team used Langchain's map-reduce implementation rather than writing a custom one.
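    The map-reduce summarization technique works roughly as follows. A simplified Python sketch (Langchain's implementation additionally handles token limits, recursion, and parallelism; `llm` here is a hypothetical callable and the prompt wording is invented):

```python
def map_reduce_summarize(chunks, llm):
    """Map step: summarize each chunk independently (parallelizable).
    Reduce step: summarize the concatenation of the chunk summaries."""
    partial = [llm(f"Summarize:\n{chunk}") for chunk in chunks]
    return llm("Combine these summaries into one:\n" + "\n".join(partial))
```

    Because each map call sees only one chunk, documents far larger than the model's context window can still be summarized.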

    Communication with the designer was assisted by Figma, which reduced to a minimum the friction between developers receiving the designer's deliverables and feedback. Initially, Trello was the project-management platform of choice, but the team's curiosity led to adopting Linear as an emerging alternative. Linear helped automate ticket creation through simple Slack messages and ticket closure through GitHub pull requests.

    In addition to the web app, an iOS app was built. Because the web app's architecture was built around the server-side rendering paradigm, the fastest way to get the iOS app to market was the WebView functionality offered by Apple's WebKit framework. Due to concerns about efficiency, extensibility, and maintenance, and with infrastructure and code complexity rising under this approach, a decision was made to develop a fully fledged mobile app in the future, either in native Swift without WebView or with a cross-platform solution such as React Native or Flutter.

    Diagrams


    Technology and services used

    • OpenAI
    • Langchain
    • Python with FastAPI
    • AsyncIO
    • Supabase (which includes a Postgres database)
    • Swift
    • RevenueCat
    • ChromaDB vector database
    • Railway
    • Sentry
    • Figma
    • Trello and Linear
About Me

I'm Ismail

Full-Stack Engineer

Solutions Expert

From the first meeting, first line of code to a fully tested deployed app, I can lead, design and build throughout the entire process.

I have ample experience working directly with founders and entrepreneurs building successful, innovative applications. I am proud of my clients' success and of having helped them achieve their dreams.

Technology Stack Master List

I am committed to continuous learning, and my language-neutral coding dexterity allows me to adapt easily to new technologies. I also have the benefit of working in an office with four Upwork developers. We have worked together for years, having met at the same IT school. You can work with us individually or as a team; either way, you benefit from our collective brain power, as we are always helping each other as needed.

Here is a list of the frameworks, technologies and libraries in which one or more of us are fully proficient:

  • React (javascript)
  • Node.js (javascript)
  • Express (javascript)
  • Next.js (javascript)
  • FastAPI (python)
  • MySQL
  • PostgreSQL
  • MongoDB
  • InfluxDB
  • Mongoose (javascript)
  • Langchain (python and javascript)
  • Redux (React/javascript/typescript)
  • jQuery (javascript)
  • Material UI (React/javascript)
  • Supabase
  • Pinecone
  • Typescript (javascript)
  • PDFKit (javascript)
  • Sequelize (javascript)
  • SwiftUI (swift)
  • SwiftData (swift)
  • Next UI (Next)
  • Chakra UI (React/Next.js)
  • Bootstrap (html & css & javascript)
  • React Native (javascript)
  • AWS
  • Railway
  • Vercel
  • Serverless
  • Selenium (python)
  • Mocha.js (javascript)
  • Electron (javascript)
  • Jest (javascript)
  • Loopback (javascript)
  • d3.js (javascript)
  • Lodash (javascript)
  • Azure
  • Chart.js (javascript)
  • Tailwind (html & css)
  • TailwindUI (react)
  • Moment.js (javascript)
  • Django (python)
  • Laravel (php)

This list is not exhaustive. We are always on the lookout for new technologies, harnessing them as best serves our clients. Especially in AI, new tools seem to appear every week. For example, at the time of this website's latest update, Google had just launched a new service, Conversational Agents, as part of its Customer Engagement Suite, which we will definitely use for current and upcoming projects. Learning new tools and technologies is essential to being a productive and responsible developer. I welcome the challenge.

My Story

I've always been fascinated by technology, problem-solving, and science—whether it was as a child exploring the mechanics of a stethoscope, excelling as a high school state physics champion and International Physics Olympiad contestant, or through my ongoing career in software engineering. In 2017, I joined BILD-IT, an 18-month professional software training program funded by the British Embassy. The program's exceptional quality and the dedication of my instructors inspired me so much that I continued to contribute as a teacher after completing the training while also launching my career as a freelance software engineer. To expand my knowledge and enhance my engineering expertise, I am currently studying electrical engineering with a focus on intelligent systems and automation.

My interest in psychology and medicine drives my belief that engineering is a tool—not an end in itself. It's through understanding human nature and the world we live in that technology can truly be used to solve real-world problems and improve lives. I see this as the intersection of theory and action, where knowledge becomes a catalyst for meaningful change. In my freelance projects, this belief drives me to deeply understand the domain of each problem I'm solving, ensuring that the solutions I create are not just technically sound but also aligned with real-world needs.

In 2021, I joined TIKA Technologies, a freelancers' collective that began with a core group of four developers. Drawing from a network of talented professionals, many of whom are graduates of BILD-IT, we tackle complex coding challenges and collaborate on delivering high-quality solutions to our clients. At the same time, we dedicate part of our work to mentoring and supporting the next generation of software engineers, helping them start their careers in the tech industry.

Here Is Me Teaching Algorithms In 2023

Testimonial

What My Clients Say

Contact

Let's Work Together

If you are an entrepreneur or the founder of a small to medium-sized private business, and you have a new idea or a new feature you want to build, I would enjoy discussing your current challenge and the possibility of working with you.

(This personal portfolio page was created specifically for Upwork clients)