This course will provide you with a foundational understanding of machine learning models (logistic regression, multilayer perceptrons, convolutional neural networks, models for natural language processing, etc.) and demonstrate how these models can solve complex problems in a variety of industries, from medical diagnostics to image recognition to text prediction. We have also designed practice exercises that give you hands-on experience implementing these models on real data sets. These exercises teach you how to implement machine learning algorithms with PyTorch, an open-source library used by leading companies in the machine learning field (e.g., Google, NVIDIA, Coca-Cola, eBay, Snapchat, Uber, and many more).
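As a taste of the style of exercise described above, here is a minimal sketch of logistic regression in PyTorch (the first model covered in the curriculum). The toy data and hyperparameters are made up for illustration and are not taken from the course materials:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility of this sketch

# Hypothetical toy data: 8 samples with 2 features, labeled by which side
# of the origin they fall on
X = torch.tensor([[0.5, 1.0], [1.5, 2.0], [3.0, 3.5], [2.5, 3.0],
                  [-0.5, -1.0], [-1.5, -2.0], [-3.0, -2.5], [-2.0, -3.0]])
y = torch.tensor([1., 1., 1., 1., 0., 0., 0., 0.]).unsqueeze(1)

# Logistic regression: a single linear layer followed by a sigmoid
model = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Full-batch gradient descent on the binary cross-entropy loss
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Threshold the predicted probabilities at 0.5 to get class labels
preds = (model(X) > 0.5).float()
accuracy = (preds == y).float().mean().item()
```

On this linearly separable toy set the model quickly learns to classify every point correctly; the course exercises apply the same workflow to larger, real data sets.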
Learning Outcomes:
- Understand Core ML Concepts – Learn supervised and unsupervised learning, classification, regression, and clustering.
- Implement ML Algorithms – Apply key algorithms like linear regression, decision trees, and neural networks using Python.
- Evaluate Model Performance – Use metrics such as accuracy, precision, recall, and F1-score to assess models.
- Work with Real-World Data – Preprocess, clean, and analyze datasets for effective machine learning applications.
- Apply ML Ethically – Understand bias, fairness, and ethical considerations in machine learning models.
Additional Features:
- No prerequisites required
- Interactive exercises & real-world examples
- Ivy School Certificate Available
- Duke University Certificate (optional, additional cost: $60)
Duke University Instructors:
Curriculum
- 6 Sections
- 69 Lessons
- 16 Weeks
- Simple Introduction to Machine Learning (22 lessons)
- 1.1 Why Machine Learning Is Exciting
- 1.2 What Is Machine Learning?
- 1.3 Logistic Regression
- 1.4 Interpretation of Logistic Regression
- 1.5 Motivation for Multilayer Perceptron
- 1.6 Multilayer Perceptron Concepts
- 1.7 Multilayer Perceptron Math Model
- 1.8 Deep Learning
- 1.9 Interpretation of Multilayer Perceptron
- 1.10 Transfer Learning
- 1.11 Model Selection
- 1.12 Early History of Neural Networks
- 1.13 Hierarchical Structure of Images
- 1.14 Convolution Filters
- 1.15 Convolutional Neural Network
- 1.16 CNN Math Model
- 1.17 How the Model Learns
- 1.18 Advantages of Hierarchical Features
- 1.19 CNN on Real Images
- 1.20 Applications in Use and Practice
- 1.21 Deep Learning and Transfer Learning
- 1.22 Introduction to PyTorch
- Basics of Model Learning (6 lessons)
- Image Analysis with Convolutional Neural Network (7 lessons)
- Recurrent Neural Network for Natural Language Processing (13 lessons)
- 4.1 Introduction to the Concept of Word Vectors
- 4.2 Words to Vectors
- 4.3 Example of Word Embeddings
- 4.4 Neural Model of Text
- 4.5 The Softmax Function
- 4.6 Methods for Learning Model Parameters
- 4.7 More Details on How to Learn Model Parameters
- 4.8 The Recurrent Neural Network
- 4.9 Long Short-Term Memory
- 4.10 Long Short-Term Memory Review
- 4.11 Use of LSTM for Text Synthesis
- 4.12 Simple and Effective Alternative Methods for Neural NLP
- 4.13 Natural Language Processing with PyTorch
- The Transformer Network for Natural Language Processing (12 lessons)
- 5.1 Word Vectors and Their Interpretation
- 5.2 Relationships Between Word Vectors
- 5.3 Inner Products Between Word Vectors
- 5.4 Intuition Into Meaning of Inner Products of Word Vectors
- 5.5 Introduction of Attention Mechanism
- 5.6 Queries, Keys, and Values of Attention Network
- 5.7 Self-Attention and Positional Encodings
- 5.8 Attention-Based Sequence Encoder
- 5.9 Coupling the Sequence Encoder and Decoder
- 5.10 Cross Attention in the Sequence-to-Sequence Model
- 5.11 Multi-Head Attention
- 5.12 The Complete Transformer Network
- Introduction to Reinforcement Learning (9 lessons)
- 6.1 Introduction to Reinforcement Learning
- 6.2 Reinforcement Learning Problem Setup
- 6.3 Reinforcement Learning with PyTorch
- 6.4 Moving to a Non-Myopic Policy
- 6.5 Q Learning
- 6.6 Extensions of Q Learning
- 6.7 Limitations of Q Learning, and Introduction to Deep Q Learning
- 6.8 Deep Q Learning Based on Images
- 6.9 Connecting Deep Q Learning with Conventional Q Learning
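To give a flavor of the Q-learning material in the final section, here is a minimal sketch of tabular Q-learning on a made-up toy environment (a 5-state chain); the environment, hyperparameters, and episode count are illustrative assumptions, not the course's actual exercises:

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical toy environment: a 1-D chain of 5 states; action 0 moves left,
# action 1 moves right; reaching the rightmost state yields reward 1 and ends
# the episode.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: Q[state][action]

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max Q(s',·) - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy extracted from the learned Q-table
policy = [max(ACTIONS, key=lambda act: Q[s][act]) for s in range(N_STATES)]
```

After training, the greedy policy moves right in every non-terminal state; deep Q-learning, also covered in the section, replaces the table with a neural network.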