4/24/24

Generative AI: Create Code from GitHub User Stories - Large Language Models

Overview

This presentation explores the potential of Generative AI, specifically Large Language Models (LLMs), for streamlining software development by generating code directly from user stories written in GitHub. We delve into benefits like increased developer productivity and discuss techniques like prompt engineering and user story writing for effective code generation. Using Python and the Gemini API, we showcase a practical example of reading user stories from GitHub, generating code, and updating the corresponding story with the results.

#BuildwithAI Series

Generative AI: Create Code from GitHub User Stories - LLM

  • Follow this GitHub repo during the presentation: (Give it a star and follow the project)

👉 https://github.com/ozkary/ai-engineering

  • Read more information on my blog at:

YouTube Video

Video Agenda

Agenda:

  • Introduction to LLMs and their Role in Code Generation
  • Prompt Engineering - Guiding the LLM
  • Writing User Stories for Code Generation
  • Introducing Gemini AI and AI Studio
  • Python Implementation - A Practical Example using VS Code
    • Reading user stories from GitHub.
    • Utilizing Gemini AI to generate code based on the user story.
    • Updating the corresponding GitHub user story with the generated code.
  • Conclusion: Summarize the key takeaways of the article, emphasizing the potential of Generative AI in code creation.

Why join this session?

  • Discover how Large Language Models (LLMs) can automate code generation, saving you valuable time and effort.
  • Learn how to craft effective prompts that guide LLMs to generate the code you need.
  • See how to write user stories that bridge the gap between human intent and AI-powered code creation.
  • Explore Gemini AI and AI Studio
  • Witness Code Generation in Action: Experience a live demonstration using VS Code, where user stories from GitHub are transformed into code with the help of Gemini AI.

Presentation

What are LLMs - Not Skynet

Large Language Model (LLM) refers to a class of Generative AI models designed to understand prompts and questions and generate human-like text based on large amounts of training data. LLMs are built upon Foundation Models, which focus on language understanding.

Common Tasks

  • Text and Code Generation: LLMs can generate code snippets or even entire programs based on specific requirements

  • Natural Language Processing (NLP): Understand and generate human language, sentiment analysis, translation

  • Text Summarization: LLMs can condense lengthy pieces of text into concise summaries

  • Question Answering: LLMs can access and process information from various sources to answer questions, making a great fit for chatbots

Generative AI: Foundation Models

Training LLM Models - Secret Sauce

Models are trained using a combination of machine learning and deep learning. Massive datasets of text and code are collected, cleaned, and fed into complex neural networks with multiple layers. These networks iteratively learn by analyzing patterns in the data, allowing them to map inputs like user stories to desired outputs such as code generation.

Training Process:

  • Data Collection: Sources from books, articles, code repositories, and online conversations

  • Preprocessing: Data cleaning and formatting for the ML algorithms to understand it effectively

  • Model Training: The neural network architecture is trained on the data. The network adjusts its internal parameters to learn how to map input data (user stories) to desired outputs (code snippets)

  • Fine-tuning: Fine-tune models for specific tasks like code generation, by training the model on relevant data (e.g., specific programming languages, coding conventions).

Generative AI: Neural-Network

Transformer Architecture - Not Autobots

Transformer is a neural network architecture that excels at processing long sequences of text by analyzing relationships between words, no matter how far apart they are. This allows LLMs to understand complex language patterns and generate human-like text.

Components

  • Encoder: Processes the input (user story) through multiple encoder layers with a self-attention mechanism that analyzes the relationships between words

  • Decoder: Uses the encoded information and its own attention mechanism to generate the output text (like code), ensuring it aligns with the input.

  • Attention Mechanism: Enables the model to effectively focus on the most important information for the task at hand, leading to improved NLP and generation capabilities.

Generative AI: Transformers encoder decoder attention mechanism

👉 Read: Attention is all you need by Google, 2017
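
To make the attention mechanism less abstract, here is a minimal Python sketch of scaled dot-product attention, the core operation described in that paper. The tiny matrices and dimensions are made-up toy values for illustration only.

```python
# Minimal sketch of scaled dot-product attention using NumPy.
# The 3 tokens x 4 dimensions shapes are toy values, not real model sizes.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention-weighted combination of the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax rows
    return weights @ V                                 # blend values by attention

np.random.seed(0)
Q = K = V = np.random.rand(3, 4)   # toy embeddings for three tokens
print(scaled_dot_product_attention(Q, K, V))
```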

Prompt Engineering - What is it?

Prompt engineering is the process of designing and optimizing prompts to better utilize LLMs. Well-crafted prompts help the AI model understand the context and generate more accurate responses.

Features

  • Clarity and Specificity: Effective prompts are clear, concise, and specific about the task or desired response

  • Task Framing: Provide background information, specify the desired output format (e.g., code, email, poem), or outline specific requirements

  • Examples and Counter-Examples: Including relevant examples and counterexamples within the prompt can further guide the LLM

  • Instructional Language: Use clear and concise instructions to improve the LLM's understanding of what information to generate

User Story Prompt:

As a web developer, I want to create a React component with TypeScript for a login form that uses JSDoc for documentation, hooks for state management, includes a "Remember This Device" checkbox, and follows best practices for React and TypeScript development so that the code is maintainable, reusable, and understandable for myself and other developers, aligning with industry standards.

Needs:

- Component named "LoginComponent" with state management using hooks (useState)
- Input fields:
    - ID: "email" (type="email") - Required email field (as username)
    - ID: "password" (type="password") - Required password field
- Buttons:
    - ID: "loginButton" - "Login" button
    - ID: "cancelButton" - "Cancel" button
- Checkbox:
    - ID: "rememberDevice" - "Remember This Device" checkbox

Generate Code from User Stories - Practical Use Case

In the Agile methodology, user stories are used to capture requirements, tasks, or features from the perspective of a role in the system. For code generation, developers can write user stories that capture the context, requirements, and technical specifications necessary to generate code with AI.

Code Generation Flow:

  • 1 User Story: Get the GitHub task with the user story information

  • 2 LLM Model: Send the user story as a prompt to the LLM model

  • 3 Generated Code: Send the generated code back to GitHub as a comment for a developer to review

👉 LLM-generated code is not perfect, and developers should manually review and validate the generated code.
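
A hedged Python sketch of this flow is shown below, using the PyGithub library to read the issue and post the result, and a generate_code() helper defined in the Gemini sketch later in this post. The repository name, issue number, and token handling are placeholders, not the talk's actual values.

```python
# Sketch of the user story -> LLM -> review comment flow with PyGithub.
from github import Github

def process_user_story(token: str, repo_name: str, issue_number: int) -> None:
    client = Github(token)
    repo = client.get_repo(repo_name)

    # 1. User Story: read the GitHub issue that holds the story
    issue = repo.get_issue(number=issue_number)
    user_story = issue.body or ""

    # 2. LLM Model: send the user story as a prompt to the model
    generated_code = generate_code(user_story)  # see the Gemini sketch below

    # 3. Generated Code: post the result back as a comment for review
    issue.create_comment("Generated code for review:\n\n" + generated_code)

# process_user_story("<GITHUB_TOKEN>", "ozkary/ai-engineering", 1)
```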

Generative AI: Generate Code Flow

How Do LLMs Impact Development?

LLMs accelerate development by generating code faster, leading to shorter development cycles. They also automate documentation and empower exploration of complex algorithms, fostering innovation.

Features:

  • Code Completion: Analyze your code and suggest completions based on context

  • Code Synthesis: Describe what you want the code to do, and the LLM can generate the code

  • Code Refactoring: Analyze your code and suggest improvements for readability, performance, or best practices.

  • Documentation: Generate documentation that explains your code's purpose and functionality

  • Code Translation: Translate code snippets between different programming languages

Generative AI: React Code Generation

👉 Security Concerns: Malicious actors could potentially exploit LLMs to generate harmful code.

What is Gemini AI?

Gemini is Google's next-generation large language model (LLM), unlocking the potential of Generative AI. This powerful tool understands and generates various data formats, from text and code to images and audio.

Components:

  • Gemini: Google's next-generation multimodal LLM, capable of understanding and generating various data formats (text, code, images, audio)

  • Gemini API: Integrate Gemini's capabilities into your applications with a user-friendly API

  • Google AI Studio: A free, web-based platform for prototyping with Gemini at aistudio.google.com

    • Experiment with prompts and explore Gemini's capabilities
    • Generate creative text formats and translate languages
    • Export your work to code for seamless integration into your projects

Generative AI: Google AI Studio

👉 Multimodal LLMs can handle text, images, video, code
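
The snippet below is a minimal sketch of calling the Gemini API from Python with the google-generativeai package. The model name and environment-variable key handling are assumptions; check Google AI Studio for the current model identifiers.

```python
# Hedged sketch: send a user story prompt to Gemini and return the text reply.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # key from AI Studio

def generate_code(user_story: str) -> str:
    """Ask the model to generate code that satisfies the user story."""
    model = genai.GenerativeModel("gemini-pro")         # assumed model name
    prompt = "Generate code that satisfies this user story:\n\n" + user_story
    response = model.generate_content(prompt)
    return response.text
```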

Generative AI for Development Summary

LLMs play a crucial role in code generation by harnessing their language understanding and generative capabilities. People in roles like developers, data engineers, scientists, and others can use these models to swiftly generate scripts in various programming languages, streamlining their programming tasks.

Common Tasks:

  • Code generation
  • Natural Language Processing (NLP)
  • Text summarization
  • Question answering

Architecture:

  • Multi-layered neural networks
  • Training process

Transformer Architecture:

  • Encoder-Decoder structure
  • Attention mechanism

Prompt Engineering:

  • Crafting effective prompts with user stories

Code Generation from User Stories:

  • Leveraging user stories for code generation

Thanks for reading.

Send questions or comments on Twitter @ozkary

👍 Originally published by ozkary.com

4/3/24

Architecting Insights: Data Modeling and Analytical Foundations - Data Engineering Process Fundamentals

Overview

A Data Warehouse is an OLAP system, which serves as the central data repository for historical and aggregated data. A data warehouse is designed to support complex analytical queries, reporting, and data analysis for Big Data use cases. It typically adopts a denormalized entity structure, such as a star schema or snowflake schema, to facilitate efficient querying and aggregations. Data from various OLTP sources is extracted, loaded and transformed (ELT) into the data warehouse to enable analytics and business intelligence. The data warehouse acts as a single source of truth for business users to obtain insights from historical data.

In this technical presentation, we embark on the next chapter of our data journey, delving into data modeling and building our data warehouse.

Data Engineering Process Fundamentals - Data Warehouse Design

  • Follow this GitHub repo during the presentation: (Give it a star)

👉 https://github.com/ozkary/data-engineering-mta-turnstile

  • Read more information on my blog at:

👉 https://www.ozkary.com/2023/03/data-engineering-process-fundamentals.html

YouTube Video

Video Agenda

Building on our previous exploration of data pipelines and orchestration, we now delve into the pivotal phase of data modeling and analytics. In this continuation of our data engineering process series, we focus on architecting insights by designing and implementing data warehouses, constructing logical and physical models, and optimizing tables for efficient analysis. Let's uncover the foundational principles driving effective data modeling and analytics.

Agenda:

  • Operational Data Concepts:

    • Explanation of operational data and its characteristics.
    • Discussion on data storage options, including relational databases and NoSQL databases.
  • Data Lake for Data Staging:

    • Introduction to the concept of a data lake as a central repository for raw, unstructured, and semi-structured data.
    • Explanation of data staging within a data lake for ingesting, storing, and preparing data for downstream processing.
    • Discussion on the advantages of using a data lake for data staging, such as scalability and flexibility.
  • Data Warehouse for Analytical Data:

    • Overview of the role of a data warehouse in storing and organizing structured data for analytics and reporting purposes.
    • Discussion on the benefits of using a data warehouse for analytical queries and business intelligence.
  • Data Warehouse Design and Implementation:

    • Introduction to data warehouse design principles and methodologies.
    • Explanation of logical models for designing a data warehouse schema, including conceptual and dimensional modeling.
  • Star Schema:

    • Explanation of the star schema design pattern for organizing data in a data warehouse.
    • Discussion on fact tables, dimension tables, and their relationships within a star schema.
    • Explanation of the advantages of using a star schema for analytical querying and reporting.
  • Logical Models:

    • Discussion on logical models in data warehouse design.
    • Explanation of conceptual modeling and entity-relationship diagrams (ERDs).
  • Physical Models - Table Construction:

    • Discussion on constructing tables from the logical model, including entity mapping and data normalization.
    • Explanation of primary and foreign key relationships and their implementation in physical tables.
  • Table Optimization Index and Partitions:

    • Introduction to table optimization techniques for improving query performance.
    • Explanation of index creation and usage for speeding up data retrieval.
    • Discussion on partitioning strategies for managing large datasets and enhancing query efficiency.
  • Incremental Strategy:

    • Introduction to incremental loading techniques for efficiently updating data warehouses.
    • Explanation of delta processing.
    • Discussion on the benefits of incremental loading in reducing processing time and resource usage.
  • Orchestration and Operations:

    • Tools and frameworks for orchestrating data pipelines, such as dbt.
    • Discussion on the importance of orchestration and monitoring the data processing tasks.
    • Policies to archive data in blob storage.

Why join this session?

  • Learn analytical data modeling essentials.
  • Explore schema design patterns like star and snowflake.
  • Optimize large dataset management and query efficiency.
  • Understand logical and physical modeling strategies.
  • Gain practical insights and best practices.
  • Engage in discussions with experts.
  • Advance your data engineering skills.
  • Architect insights for data-driven decisions.

Presentation

Data Engineering Overview

A Data Engineering Process involves executing steps to understand the problem, scope, design, and architecture for creating a solution. This enables ongoing big data analysis using analytical and visualization tools.

Data Engineering Process Fundamentals - Operational Data

Topics

  • Operational Data
  • Data Lake
  • Data Warehouse
  • Schema and Data Modeling
  • Data Strategy and Optimization
  • Orchestration and Operations

Follow this project: Star/Follow the project

👉 Data Engineering Process Fundamentals

Operational Data

Operational data (OLTP) is often generated by applications and stored in transactional relational databases like SQL Server and Oracle, or in NoSQL (JSON) databases like CosmosDB and Firebase. This is the data created when an application saves a user transaction, such as contact information, a purchase, or other activities performed in the application.

Features

  • Application support and transactions
  • Relational data structure and SQL or document structure NoSQL
  • Small, targeted queries for individual case analysis

Not Best For:

  • Reporting and analytical systems (OLAP)
  • Large queries
  • Centralized Big Data system

Data Engineering Process Fundamentals - Operational Data

Data Lake - From Ops to Analytical Data Staging

A Data Lake is an optimized storage system for Big Data scenarios. Its primary function is to store the data in its raw format without any transformation. Analytical data is the transactional data that has been extracted from a source system via a data pipeline as part of the data staging process.

Features:

  • Store the data in its raw format without any transformation
  • This can include structured data like CSV files, semi-structured data like JSON and XML documents, or columnar data like Parquet files
  • Low cost for massive storage power
  • Not designed for querying or data analysis
  • Its files are often referenced as external tables by other systems

Data Engineering Process Fundamentals - Data Lake for Staging the data
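
As a small illustration of the staging step, the sketch below uploads a raw CSV file to a data lake without transforming it, assuming a Google Cloud Storage bucket; the bucket and path names are placeholders.

```python
# Hedged sketch: stage a raw file in the data lake (assumes Google Cloud Storage).
from google.cloud import storage

def stage_raw_file(bucket_name: str, local_path: str, lake_path: str) -> None:
    """Upload a file to the lake in its raw format, with no transformation."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(lake_path)              # e.g. "staging/turnstile/2024-04.csv"
    blob.upload_from_filename(local_path)

# stage_raw_file("my-data-lake", "./turnstile.csv", "staging/turnstile/2024-04.csv")
```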

Data Warehouse - Staging to Analytical Data

A Data Warehouse, an OLAP system, is a centralized storage system that stores integrated data from multiple sources. The system is designed to host and serve Big Data scenarios with lower operational costs than transactional databases, but higher costs than a Data Lake.

Features:

  • Stores historical data in relational tables with an optimized schema, which enables the data analysis process
  • Provides SQL support to query and transform the data
  • Integrates external resources on Data Lakes as external tables
  • The system is designed to host and serve Big Data scenarios.
  • Storage is more expensive
  • Offloads archived data to Data Lakes

Data Engineering Process Fundamentals - Data Warehouse Analytical Data

Data Warehouse - Design and Implementation

In the design phase, we lay the groundwork by defining the database system, schema model, logical data models, and technology stack (SQL, Python, frameworks and tools) required to support the data warehouse’s implementation and operations.

In the implementation phase, we focus on converting logical data models into a functional system. By creating concrete structures like dimension and fact tables and performing data transformation tasks, including data cleansing, integration, and scheduled batch loading, we ensure that raw data is processed and unified for analysis.

Data Engineering Process Fundamentals - Data Warehouse Design

Design - Schema Modeling

The Star and Snowflake Schemas are two common data warehouse modeling techniques. The Star Schema consists of a central fact table connected to multiple dimension tables via foreign key relationships. The Snowflake Schema is a variation of the Star Schema in which the dimension tables are further normalized into multiple related tables.

What to use:

  • Use the Star Schema when query performance is a primary concern, and data model simplicity is essential

  • Use the Snowflake Schema when storage optimization is crucial, and the data model involves high-cardinality dimension attributes with potential data redundancy

Data Engineering Process Fundamentals - Data Warehouse Schema Model
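
To make the star schema concrete, here is an illustrative DDL sketch for the MTA turnstile example, executed through the BigQuery Python client. The dataset, table, and column names are assumptions for this post; the project's actual model may differ.

```python
# Illustrative star schema: one fact table keyed to a station dimension.
from google.cloud import bigquery

client = bigquery.Client()

ddl = """
CREATE TABLE IF NOT EXISTS analytics.dim_station (
  station_id   INT64,
  station_name STRING
);

CREATE TABLE IF NOT EXISTS analytics.fact_turnstile (
  station_id INT64,       -- foreign key to dim_station
  created_dt TIMESTAMP,
  entries    INT64,
  exits      INT64
);
"""

client.query(ddl).result()   # run the multi-statement DDL against the warehouse
```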

Data Modeling

Data modeling lays the foundation for a data warehouse. It starts with modeling raw data into a logical model outlining the data and its relationships, with a focus based on data requirements. This model is then translated, using DDL, into the specific views, tables, columns (data types), and keys that make up the physical model of the data warehouse, with a focus on technical requirements.

Data Engineering Process Fundamentals - Data Warehouse Data Model

Data Optimization to Deliver Performance

To achieve faster queries, improve performance and reduce resource cost, we need to efficiently organize our data. Two key techniques for accomplishing this are data partitioning and data clustering.

  • Data Partitioning: Imagine dividing your data table into smaller, self-contained segments based on a specific column (e.g., date). This allows the DW to quickly locate and retrieve only the relevant data for your queries, significantly reducing scan times.

  • Data Clustering: Allows us to organize the data within each partition based on another column (e.g., Station). This groups frequently accessed data together physically, leading to faster query execution, especially for aggregations or filtering based on the clustered column.

Data Engineering Process Fundamentals - Data Warehouse DDL Script
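
The sketch below applies both optimizations with the BigQuery Python client: daily partitioning on the date column and clustering by station. The project and table names follow the illustrative star schema above and are assumptions.

```python
# Hedged sketch: create the fact table with partitioning and clustering applied.
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("station_name", "STRING"),
    bigquery.SchemaField("created_dt", "TIMESTAMP"),
    bigquery.SchemaField("entries", "INTEGER"),
    bigquery.SchemaField("exits", "INTEGER"),
]

table = bigquery.Table("my-project.analytics.fact_turnstile", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="created_dt",                      # partition: prune scans by date
)
table.clustering_fields = ["station_name"]   # cluster: co-locate rows by station

client.create_table(table, exists_ok=True)
```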

Data Transformation and Incremental Strategy

The data transformation phase is a critical stage in a data warehouse project. This phase involves several key steps, including data extraction, cleaning, loading, data type casting, and the use of naming conventions, as well as implementing incremental loads that insert only the new information since the last update via batch processes.

Data Engineering Process Fundamentals - Data Warehouse Data Lineage

  • Data Lineage: Tracks the flow of data from its origin to its destination, including all the intermediate processes and transformations that it undergoes.
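
A hedged sketch of the incremental (delta) load is shown below as a MERGE statement run through the BigQuery client; only rows newer than the last load are inserted. Table names, keys, and the date filter are assumptions for illustration.

```python
# Hedged sketch: incremental load that inserts only new rows since the last update.
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE analytics.fact_turnstile AS target
USING (
  SELECT station_id, created_dt, entries, exits
  FROM staging.turnstile
  WHERE created_dt > (SELECT MAX(created_dt) FROM analytics.fact_turnstile)
) AS source
ON  target.station_id = source.station_id
AND target.created_dt = source.created_dt
WHEN NOT MATCHED THEN
  INSERT (station_id, created_dt, entries, exits)
  VALUES (source.station_id, source.created_dt, source.entries, source.exits)
"""

client.query(merge_sql).result()   # batch process: append the delta only
```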

Orchestration and Operations

Effective orchestration and operations are key to a reliable and efficient data project. They streamline data pipelines, ensure data quality, and minimize human intervention. This translates to faster development cycles, reduced errors, and improved overall data management.

  • Version Control and CI/CD with GitHub: Enables development, automated testing, and seamless deployment of data pipelines.

  • Documentation: Maintain clear and comprehensive documentation covering data pipelines, data quality checks, scheduling, and data archiving policies

  • Scheduling and Automation: Automates repetitive tasks, such as data ingestion, transformation, and archiving processes

  • Monitoring and Notification: Provides real-time insights into pipeline health, data quality, and archiving success

Data Engineering Process Fundamentals - Data Warehouse Data Lineage
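
As a minimal orchestration sketch, the snippet below runs the dbt transformations and tests from Python and logs the outcome for monitoring. The project path and the notification hook are assumptions; a production setup would typically use an orchestrator or a CI/CD workflow instead.

```python
# Minimal sketch: run dbt steps, log results, and raise on failure for alerting.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

def run_dbt_job(project_dir: str = ".") -> None:
    """Execute dbt run and dbt test, surfacing failures for notification."""
    for command in (["dbt", "run"], ["dbt", "test"]):
        result = subprocess.run(command, cwd=project_dir, capture_output=True, text=True)
        if result.returncode != 0:
            logging.error("step failed: %s\n%s", " ".join(command), result.stderr)
            raise RuntimeError("dbt job failed")   # hook a notification here
        logging.info("step succeeded: %s", " ".join(command))

# run_dbt_job("./dbt")
```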

Summary

Before we can move data into a data warehouse system, we explore two pivotal phases for our data warehouse solution: design and implementation. In the design phase, we lay the groundwork by defining the database system, schema and data model, and technology stack required to support the data warehouse’s implementation and operations. This stage ensures a solid infrastructure for data storage and management.

In the implementation phase, we focus on converting conceptual data models into a functional system. By creating concrete structures like dimension and fact tables and performing data transformation tasks, including data cleansing, integration, and scheduled batch loading, we ensure that raw data is processed and unified for analysis.

Thanks for reading.

Send questions or comments on Twitter @ozkary

👍 Originally published by ozkary.com