Artificial Intelligence (AI) programming focuses on creating algorithms and models that enable machines to mimic human intelligence. This involves a range of techniques, from basic rule-based systems to advanced machine learning and deep learning methods. Machine learning, a subset of AI, allows systems to learn and improve from experience without being explicitly programmed. This is achieved through various algorithms such as supervised learning, unsupervised learning, and reinforcement learning, each suited to different types of tasks and data.
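As a concrete illustration of the distinction between these paradigms, the short sketch below (assuming scikit-learn is installed) fits a supervised classifier on labelled synthetic data and then clusters the same points without labels; the dataset and model choices are purely illustrative.

```python
# A minimal sketch contrasting supervised and unsupervised learning with
# scikit-learn on a small synthetic dataset; all settings are illustrative.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Three well-separated groups of points; labels y mark the true group.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised learning: fit on labelled examples, evaluate on held-out data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: group the same points without looking at the labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", sorted(int((km.labels_ == k).sum()) for k in range(3)))
```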
Deep learning, a specialized branch of machine learning, uses neural networks with many layers to analyze complex data patterns. It has been particularly successful in fields like computer vision, natural language processing, and speech recognition. For instance, convolutional neural networks (CNNs) excel at image and video analysis, while recurrent neural networks (RNNs) and transformers are widely used for processing sequential data, such as text and speech. These models require significant computational power and large datasets, often leveraging GPUs and TPUs for efficient training and inference.
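For instance, a small convolutional network of the kind described above can be defined in a few lines. The sketch below assumes TensorFlow/Keras is available and uses illustrative layer sizes for 28×28 grayscale images; it is not a prescribed architecture.

```python
# A minimal sketch of a small convolutional network for image classification,
# assuming TensorFlow/Keras; the layer sizes and class count are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),           # e.g. grayscale digits
    tf.keras.layers.Conv2D(16, 3, activation="relu"),   # learn local image features
    tf.keras.layers.MaxPooling2D(),                      # downsample feature maps
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),     # 10 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```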
AI programming also involves integrating these intelligent models into real-world applications. This includes developing interfaces, ensuring system scalability, and maintaining data privacy and security. Applications of AI span various industries, from healthcare and finance to autonomous vehicles and personalized marketing. Ethical considerations are paramount in AI development, addressing issues such as bias, transparency, and accountability to ensure that AI systems are fair and beneficial to society. Thus, AI programming is a multifaceted discipline that combines technical expertise with ethical responsibility to create intelligent, impactful solutions.
Scheme of Study
CAI 101 – Application of Artificial Intelligence
CAI 201 – Lab – Application of Artificial Intelligence
| Sl.No | Contents | Theory (Hrs) | Practical (Hrs) | Total (Hrs) |
|-------|----------|--------------|-----------------|-------------|
| CAI 101 | Application of Artificial Intelligence | 20 | – | 20 |
| CAI 201 | Lab – Application of Artificial Intelligence | – | 80 | 80 |
|  | Total | 20 | 80 | 100 |
CAI 101 – Application of Artificial Intelligence (Theory – 20 Hrs)
Module – I
Introduction to Python – Basics – Python Data Structures – Functions using Python – NumPy – SciPy – Supervised Learning Techniques: Decision Trees – Decision Tree Induction – ID3 – Attribute Selection Measures – Evaluation of classifiers – Dealing with categorical data – Decision Tree using sklearn – Naïve Bayes Classifier – Naïve Bayes Prediction – Naïve Bayes for Climate data – Dealing with dimensionality – Naïve Bayes using sklearn – Support Vector Machines – Linear Classifiers – Margin of SVMs – SVM optimization – SVM for data that is not linearly separable – Learning non-linear patterns – Kernel Trick – SVM Parameter Tuning – Handling class imbalance in SVMs – SVM pros, cons and summary – Linear SVM using Python – SVM with RBF kernel using Python – Unsupervised Learning Techniques: K-means clustering – Algorithm – Initialization Methods – Dealing with distributed data – Dealing with noisy data – K-means using Python – K-means using sklearn – Hierarchical clustering – Types and complexity – Cluster Dissimilarity – Linkage Criteria – Dendrogram Generation – Clustering using sklearn – CASE STUDY – Machine Learning for Healthcare – Machine Learning for Education – Mini PROJECT – Classification on Iris dataset (a sketch follows this module outline)
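The mini project named at the end of this module (classification on the Iris dataset) can be approached along the lines of the following scikit-learn sketch, which compares the three supervised methods covered in the module; the train/test split and hyperparameters are illustrative, not prescribed.

```python
# A minimal sketch for the Module I mini project (classification on the Iris
# dataset), comparing the three supervised methods named in the outline.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0, gamma="scale"),
}
for name, model in models.items():
    model.fit(X_train, y_train)                      # learn from labelled data
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```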
Module – II
Intermediate Python – Expression evaluation using Python – Regular Expressions – Visualization – Feature Selection – What is feature selection? Why feature selection? – Feature selection vs feature extraction – Feature subset selection – Wrapper Methods – Feature generation in bag of words – Dimensionality Reduction – Principal Component Analysis – Standardization – Covariance Estimation – Eigenvalue Determination – Accuracy Improvement – Python Code: PCA on Iris dataset (a sketch follows this module outline) – Supervised Learning Techniques: Regression Analysis – Univariate Regression – Multivariate Regression – Logistic Regression – Logistic Regression for Multi-class Classification – Regularization and Parametric Learning – Python: Regression Analysis in Stock Market Prediction – Model-based selection techniques – Hidden Markov Model – Dishonest Casino Example of an HMM – Three Problems of an HMM – The Forward Algorithm – The Backward Algorithm and Posterior Decoding – The Learning Problem of an HMM, the Baum-Welch Algorithm – Python code: Creating a simple Gaussian HMM – Evaluation of classifiers – Machine Learning Process – Qualities of a Classifier – Technical Practical Issues in ML – Non-Technical Practical Issues in ML – Unsupervised Learning Techniques: BIRCH – Problem with previous methods – Advantages of BIRCH – Algorithm – Calculations with Cluster Features – DBSCAN – History – Algorithm – Complexity – Dealing with Noise – Advantages – Disadvantages – Parameter Estimation – Extensions – DBSCAN using Python – DBSCAN using sklearn – CASE STUDY – Machine Learning for Transportation and Self-driving Cars – Machine Learning for Banking Domain – Mini PROJECT – Clustering: Image Segmentation – Clustering: Movie Recommendation
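The "PCA on Iris dataset" item referenced in this module might look roughly like the following scikit-learn sketch: standardise the features, fit PCA, and inspect the explained variance. The choice of two components is illustrative.

```python
# A minimal sketch of PCA on the Iris dataset with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)
X_std = StandardScaler().fit_transform(X)   # zero mean, unit variance per feature

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_std)             # project onto the top 2 components
print("explained variance ratio:", pca.explained_variance_ratio_)
print("reduced shape:", X_2d.shape)         # (150, 2)
```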
Module – III
Intermediate Python – Expression evaluation using Python – Regular Expressions – Visualization – Cross-validation and re-sampling methods – K-fold cross-validation – Bootstrapping – Measuring classifier performance – ROC curves – Enhancing Accuracy – Combining multiple learners – Model combination schemes – Voting, Bagging, Boosting – Bootstrap Aggregation – Neural Networks – History of Artificial Neural Networks – Models of a Neuron – Activation functions – Architectures – Perceptron learning – Learning the weights of a neuron – Error surface – Algorithm & Applications – Decision Boundary for a single Neuron – Learning Non-Linear Patterns – Softmax and Binary/Multi-class Cross-Entropy Loss – MP neuron – Hebb net – MP neuron using Python – Hebb net using Python – Python Code: MLP for Hand-written Digit Recognition with two hidden layers (a sketch follows this module outline) – Forward Propagation Algorithm – Backward Propagation Algorithm – Forward Propagation Algorithm using Python – Backward Propagation Algorithm using Python.
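The MLP exercise referenced above could be prototyped as in the following sketch, which uses scikit-learn's MLPClassifier and its built-in 8×8 digits dataset as a stand-in; the layer sizes and iteration count are illustrative.

```python
# A minimal sketch of an MLP with two hidden layers for hand-written digit
# recognition, using scikit-learn's built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 digit images, flattened to 64 features
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32),   # two hidden layers
                    activation="relu",
                    max_iter=500,
                    random_state=0)
mlp.fit(X_train, y_train)                          # weights learned via backpropagation
print("test accuracy:", round(mlp.score(X_test, y_test), 3))
```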
Module – IV
Introduction to Deep Learning – Deep Neural Networks – Convolutional Neural Networks – Introduction to TensorFlow – Basic classification – Text classification – Regression – Overfitting and underfitting (a sketch follows this module outline) – Introduction to Google Colab – GPU Processing – Processing the IRIS dataset – Introduction to AWS SageMaker – Recommendation Engines – Best Practices in ML – CASE STUDY – Machine Learning for Supply Chain Management – Machine Learning for Banking Domain – PROJECT
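The basic-classification and overfitting topics above can be explored with a short TensorFlow/Keras script along these lines; the Fashion-MNIST dataset (downloaded by Keras), the network architecture, and the early-stopping settings are illustrative choices, not prescribed ones.

```python
# A minimal sketch of basic image classification with TensorFlow/Keras,
# using a validation split and early stopping as a simple overfitting guard.
import tensorflow as tf

(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0        # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping on the validation loss limits overfitting to the training set.
early_stop = tf.keras.callbacks.EarlyStopping(patience=2, restore_best_weights=True)
model.fit(X_train, y_train, epochs=20, validation_split=0.2,
          callbacks=[early_stop], verbose=2)
print("test accuracy:", model.evaluate(X_test, y_test, verbose=0)[1])
```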