Holdings record
LDR | | 00000nam a2200000 a
001 | | 0100825090▲
005 | | 20241105100214▲
007 | | ta▲
008 | | 241002s2024 enka b 001 eng d▲
020 | | ▼a9781835460825▼q(pbk.)▲
040 | | ▼a221016▼c221016▲
082 | 04 | ▼a006.3▼223▲
090 | | ▼a006.3▼bR696g▲
100 | 1 | ▼aRodriguez, Carlos.▲
245 | 10 | ▼aGenerative AI foundations in Python :▼bdiscover key techniques and navigate modern challenges in LLMs /▼cby Carlos Rodriguez ; foreword by Samira Shaikh.▲
260 | | ▼aBirmingham, UK :▼bPackt Publishing,▼c2024.▲
300 | | ▼axvii, 171 p. :▼bill. ;▼c24 cm.▲
336 | | ▼atext▼btxt▼2rdacontent▲
337 | | ▼aunmediated▼bn▼2rdamedia▲
338 | | ▼avolume▼bnc▼2rdacarrier▲
504 | | ▼aIncludes bibliographical references and index.▲
505 | | ▼aPreface -- Part 1 : Foundations of Generative AI and the Evolution of Large Language Models -- Chapter 1 : Understanding Generative AI : An Introduction -- Generative AI -- Distinguishing generative AI from other AI models -- Briefly surveying generative approaches -- Clarifying misconceptions between discriminative and generative paradigms -- Choosing the right paradigm -- Looking back at the evolution of generative AI -- Overview of traditional methods in NLP -- Arrival and evolution of transformer-based models -- Development and impact of GPT-4 -- Looking ahead at risks and implications -- Introducing use cases of generative AI -- The future of generative AI applications -- Summary -- References -- Chapter 2 : Surveying GenAI Types and Modes : An Overview of GANs, Diffusers, and Transformers -- Understanding General Artificial Intelligence (GAI) Types -- distinguishing features of GANs, diffusers, and transformers -- Deconstructing GAI methods -- exploring GANs, diffusers, and transformers -- A closer look at GANs -- A closer look at diffusion models -- A closer look at generative transformers -- Applying GAI models -- image generation using GANs, diffusers, and transformers -- Working with Jupyter Notebook and Google Colab -- Stable diffusion transformer -- Scoring with the CLIP model -- Summary -- References -- Chapter 3 : Tracing the Foundations of Natural Language Processing and the Impact of the Transformer -- Early approaches in NLP -- Advent of neural language models -- Distributed representations -- Transfer Learning -- Advent of NNs in NLP -- The emergence of the Transformer in advanced language models -- Components of the transformer architecture -- Sequence-to-sequence learning -- Evolving language models -- the AR Transformer and its role in GenAI -- Implementing the original Transformer -- Data loading and preparation -- Tokenization -- Data tensorization -- Dataset creation -- Embeddings layer -- Positional encoding -- Multi-head self-attention -- FFN -- Encoder layer -- Encoder -- Decoder layer -- Decoder -- Complete transformer -- Training function -- Translation function -- Main execution -- Summary -- References -- Chapter 4 : Applying Pretrained Generative Models : From Prototype to Production -- Prototyping environments -- Transitioning to production -- Mapping features to production setup -- Setting up a production-ready environment -- Local development setup -- Visual Studio Code -- Project initialization -- Docker setup -- Requirements file -- Application code -- Creating a code repository -- CI/CD setup -- Model selection : choosing the right pretrained generative model -- Meeting project objectives -- Model size and computational complexity -- Benchmarking -- Updating the prototyping environment -- GPU configuration -- Loading pretrained models with LangChain -- Setting up testing data -- Quantitative metrics evaluation -- Alignment with CLIP -- Interpreting outcomes -- Responsible AI considerations -- Addressing and mitigating biases -- Transparency and explainability -- Final deployment -- Testing and monitoring -- Maintenance and reliability -- Summary -- Part 2 : Practical Applications of Generative AI -- Chapter 5 : Fine-Tuning Generative Models for Specific Tasks -- Foundation and relevance : an introduction to fine-tuning -- PEFT -- LoRA -- AdaLoRA -- In-context learning -- Fine-tuning versus in-context learning -- Practice project : Fine-tuning for Q&A using PEFT -- Background regarding question-answering fine-tuning -- Implementation in Python -- Evaluation of results -- Summary -- References -- Chapter 6 : Understanding Domain Adaptation for Large Language Models -- Demystifying domain adaptation : understanding its history and importance -- Practice project : Transfer learning for the finance domain -- Training methodologies for financial domain adaptation -- Evaluation and outcome analysis : the ROUGE metric -- Chapter 7 : Mastering the Fundamentals of Prompt Engineering -- The shift to prompt-based approaches -- Basic prompting : guiding principles, types, and structures -- Guiding principles for model interaction -- Prompt elements and structure -- Elevating prompts : iteration and influencing model behaviors -- LLMs respond to emotional cues -- Effect of personas -- Situational prompting or role-play -- Advanced prompting in action : few-shot learning and prompt chaining -- Practice project : Implementing RAG with LlamaIndex using Python -- Summary -- References -- Chapter 8 : Addressing Ethical Considerations and Charting a Path Toward Trustworthy Generative AI -- Ethical norms and values in the context of generative AI -- Investigating and minimizing bias in generative LLMs and generative image models -- Constrained generation and eliciting trustworthy outcomes -- Constrained generation with fine-tuning -- Constrained generation through prompt engineering -- Understanding jailbreaking and harmful behaviors -- Practice project : Minimizing harmful behaviors with filtering -- Summary -- References.▲
520 | | ▼aBegin your generative AI journey with Python as you explore large language models, understand responsible generative AI practices, and apply your knowledge to real-world applications through guided tutorials Key Features Gain expertise in prompt engineering, LLM fine-tuning, and domain adaptation Use transformer-based LLMs and diffusion models to implement AI applications Discover strategies to optimize model performance, address ethical considerations, and build trust in AI systems Purchase of the print or Kindle book includes a free PDF eBook Book Description The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application. It is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind state-of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by laying a foundational understanding, presenting the fundamentals of generative LLMs and their historical evolution, while also setting the stage for deeper exploration. You'll also understand how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you'll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to effectively and responsibly use generative LLMs. By the end of this book, you'll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly. What you will learn Discover the fundamentals of GenAI and its foundations in NLP Dissect foundational generative architectures including GANs, transformers, and diffusion models Find out how to fine-tune LLMs for specific NLP tasks Understand transfer learning and fine-tuning to facilitate domain adaptation, including fields such as finance Explore prompt engineering, including in-context learning, templatization, and rationalization through chain-of-thought and RAG Implement responsible practices with generative LLMs to minimize bias, toxicity, and other harmful outputs Who this book is for This book is for developers, data scientists, and machine learning engineers embarking on projects driven by generative AI. A general understanding of machine learning and deep learning, as well as some proficiency with Python, is expected. -- Provided by publisher.▲
650 | 0 | ▼aArtificial intelligence.▲
650 | 0 | ▼aNatural language processing (Computer science).▲
650 | 0 | ▼aPython (Computer program language).▲
650 | 0 | ▼aMachine learning.▲
700 | 1 | ▼aShaikh, Samira.▲

Generative AI foundations in Python : discover key techniques and navigate modern challenges in LLMs
Material type
Foreign monograph
Title / Statement of responsibility
Generative AI foundations in Python : discover key techniques and navigate modern challenges in LLMs / by Carlos Rodriguez ; foreword by Samira Shaikh.
Publication
Birmingham, UK : Packt Publishing, 2024.
Physical description
xvii, 171 p. : ill. ; 24 cm.
Bibliography note
Includes bibliographical references and index.
Contents note
Preface -- Part 1 : Foundations of Generative AI and the Evolution of Large Language Models -- Chapter 1 : Understanding Generative AI : An Introduction -- Generative AI -- Distinguishing generative AI from other AI models -- Briefly surveying generative approaches -- Clarifying misconceptions between discriminative and generative paradigms -- Choosing the right paradigm -- Looking back at the evolution of generative AI -- Overview of traditional methods in NLP -- Arrival and evolution of transformer-based models -- Development and impact of GPT-4 -- Looking ahead at risks and implications -- Introducing use cases of generative AI -- The future of generative AI applications -- Summary -- References -- Chapter 2 : Surveying GenAI Types and Modes : An Overview of GANs, Diffusers, and Transformers -- Understanding General Artificial Intelligence (GAI) Types -- distinguishing features of GANs, diffusers, and transformers -- Deconstructing GAI methods -- exploring GANs, diffusers, and transformers -- A closer look at GANs -- A closer look at diffusion models -- A closer look at generative transformers -- Applying GAI models -- image generation using GANs, diffusers, and transformers -- Working with Jupyter Notebook and Google Colab -- Stable diffusion transformer -- Scoring with the CLIP model -- Summary -- References -- Chapter 3 : Tracing the Foundations of Natural Language Processing and the Impact of the Transformer -- Early approaches in NLP -- Advent of neural language models -- Distributed representations -- Transfer Learning -- Advent of NNs in NLP -- The emergence of the Transformer in advanced language models -- Components of the transformer architecture -- Sequence-to-sequence learning -- Evolving language models -- the AR Transformer and its role in GenAI -- Implementing the original Transformer -- Data loading and preparation -- Tokenization -- Data tensorization -- Dataset creation -- Embeddings layer -- Positional encoding -- Multi-head self-attention -- 
FFN -- Encoder layer -- Encoder -- Decoder layer -- Decoder -- Complete transformer -- Training function -- Translation function -- Main execution -- Summary -- References -- Chapter 4 : Applying Pretrained Generative Models : From Prototype to Production -- Prototyping environments -- Transitioning to production -- Mapping features to production setup -- Setting up a production-ready environment -- Local development setup -- Visual Studio Code -- Project initialization -- Docker setup -- Requirements file -- Application code -- Creating a code repository -- CI/CD setup -- Model selection : choosing the right pretrained generative model -- Meeting project objectives -- Model size and computational complexity -- Benchmarking -- Updating the prototyping environment -- GPU configuration -- Loading pretrained models with LangChain -- Setting up testing data -- Quantitative metrics evaluation -- Alignment with CLIP -- Interpreting outcomes -- Responsible AI considerations -- Addressing and mitigating biases -- Transparency and explainability -- Final deployment -- Testing and monitoring -- Maintenance and reliability -- Summary -- Part 2 : Practical Applications of Generative AI -- Chapter 5 : Fine-Tuning Generative Models for Specific Tasks -- Foundation and relevance : an introduction to fine-tuning -- PEFT -- LoRA -- AdaLoRA -- In-context learning -- Fine-tuning versus in-context learning -- Practice project : Fine-tuning for Q&A using PEFT -- Background regarding question-answering fine-tuning -- Implementation in Python -- Evaluation of results -- Summary -- References -- Chapter 6 : Understanding Domain Adaptation for Large Language Models -- Demystifying domain adaptation : understanding its history and importance -- Practice project : Transfer learning for the finance domain -- Training methodologies for financial domain adaptation -- Evaluation and outcome analysis : the ROUGE metric -- Chapter 7 : Mastering the Fundamentals of Prompt Engineering -- The shift 
to prompt-based approaches -- Basic prompting : guiding principles, types, and structures -- Guiding principles for model interaction -- Prompt elements and structure -- Elevating prompts : iteration and influencing model behaviors -- LLMs respond to emotional cues -- Effect of personas -- Situational prompting or role-play -- Advanced prompting in action : few-shot learning and prompt chaining -- Practice project : Implementing RAG with LlamaIndex using Python -- Summary -- References -- Chapter 8 : Addressing Ethical Considerations and Charting a Path Toward Trustworthy Generative AI -- Ethical norms and values in the context of generative AI -- Investigating and minimizing bias in generative LLMs and generative image models -- Constrained generation and eliciting trustworthy outcomes -- Constrained generation with fine-tuning -- Constrained generation through prompt engineering -- Understanding jailbreaking and harmful behaviors -- Practice project : Minimizing harmful behaviors with filtering -- Summary -- References.
Summary
Begin your generative AI journey with Python as you explore large language models, understand responsible generative AI practices, and apply your knowledge to real-world applications through guided tutorials.

Key Features
- Gain expertise in prompt engineering, LLM fine-tuning, and domain adaptation
- Use transformer-based LLMs and diffusion models to implement AI applications
- Discover strategies to optimize model performance, address ethical considerations, and build trust in AI systems
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
The intricacies and breadth of generative AI (GenAI) and large language models can sometimes eclipse their practical application. It is pivotal to understand the foundational concepts needed to implement generative AI. This guide explains the core concepts behind state-of-the-art generative models by combining theory and hands-on application. Generative AI Foundations in Python begins by laying a foundational understanding, presenting the fundamentals of generative LLMs and their historical evolution, while also setting the stage for deeper exploration. You'll also understand how to apply generative LLMs in real-world applications. The book cuts through the complexity and offers actionable guidance on deploying and fine-tuning pre-trained language models with Python. Later, you'll delve into topics such as task-specific fine-tuning, domain adaptation, prompt engineering, quantitative evaluation, and responsible AI, focusing on how to effectively and responsibly use generative LLMs. By the end of this book, you'll be well-versed in applying generative AI capabilities to real-world problems, confidently navigating its enormous potential ethically and responsibly.

What you will learn
- Discover the fundamentals of GenAI and its foundations in NLP
- Dissect foundational generative architectures including GANs, transformers, and diffusion models
- Find out how to fine-tune LLMs for specific NLP tasks
- Understand transfer learning and fine-tuning to facilitate domain adaptation, including fields such as finance
- Explore prompt engineering, including in-context learning, templatization, and rationalization through chain-of-thought and RAG
- Implement responsible practices with generative LLMs to minimize bias, toxicity, and other harmful outputs

Who this book is for
This book is for developers, data scientists, and machine learning engineers embarking on projects driven by generative AI. A general understanding of machine learning and deep learning, as well as some proficiency with Python, is expected. -- Provided by publisher.
Subjects
Artificial intelligence
Natural language processing (Computer science)
Python (Computer program language)
Machine learning
ISBN
9781835460825
Call number
006.3 R696g
Holdings information
Registration no. | Call number | Location | Status | Due date | Service
---|---|---|---|---|---