Okay, I was saying that the document we have is shared, okay? I am sharing the screen again. Inside that document the overview is clearly defined as to which work we have to do on which day. Yesterday's session was an inauguration ceremony in which you were told where generative AI is being used, which jobs are being affected, and how the work is being done. Today's session is a technical overview in which we will talk about AI models, model parameters, understanding APIs, and then the ChatGPT process, at a high level. Then there will be practice sessions for this, okay? Tuesday to Friday. There will be a question-and-answer session on Friday in which we will practice these things again. Then next week this work will continue in the main session and we will practice the same things again in the practice session, okay? And so on; we have the complete agenda defined. So you must read this document once. Let's continue now, because we still have some slides, and then we will take questions from you at the end. Yes ma'am, Doctor Taba, you are with us. Misbah is with us. Ma'am, if you are with me, I have made you a co-host. Kindly unmute yourself. Okay. Assalamu Alaikum everyone. Am I audible? Yes ma'am, your voice is clear and you can share the screen. I'm stopping my screen. Okay. I had joined this session right from the beginning and I have seen that people who have joined are getting very panicky. They may be students, or from different companies or different fields. You do not need to worry at all. Today's session is an introduction session. If you have any questions related to deep learning or machine learning, you can ask me at the end of this session, okay? I will try to start from the very basics so that even those who do not have a background in engineering or computer systems can get things clear. So, many questions came about Hugging Face: why are we studying Hugging Face? Why are we using it?
What is Hugging Face? So what is Hugging Face, basically? You have a website where people have already created different applications and different models so that you do not have to start from zero. If you want to design or create an application, you should first go and search on Hugging Face. If that application already exists, you can simply download its dataset or code and make changes to it. And secondly, in the coming sessions, whatever application you design yourself, you will also do it through Hugging Face. In this slide, four websites have been shown to you: Hugging Face, Kaggle, TensorFlow Hub, and Model Zoo. But the one we'll be using through this course is Hugging Face. You will be designing your own applications in the coming weeks using Hugging Face. The next trainers will tell you how to do it, Insha Allah. So you don't need to panic at all. You will become so well trained, or learn so much, that you will be designing your own applications on Hugging Face. We will start everything from the basics. Next week, Insha Allah, we will start designing applications on Hugging Face, and those students or people who are facing difficulties should definitely join the practice sessions. My advice to you is that you must join our main sessions on Sunday and Monday. You should also join the remaining Tuesday to Friday sessions in the week so that your basics become strong and you have a good understanding in the next sessions, because the further we move forward, the more difficult things will become. So if you are interested, I would recommend that you must join the practice sessions, because all your questions will be cleared in them. Whatever we read today, we will practice all of it in the coming days before going to the second main session. So first we have Hugging Face. You got a little introduction about what Hugging Face is.
You have the tools you use to design your application, and the applications already designed by people are also available to you there. Second, we have Kaggle. Using Kaggle, we download different datasets. If you have to design any model, you need datasets on which to train it. So where do those datasets come from? They come from Kaggle. Similarly, we have TensorFlow Hub. In it you will find code and libraries specifically related to TensorFlow. And last we had Model Zoo. Are you getting my screen share? I had the tabs open so I could give you a little introduction. This is Kaggle's website. If you simply type Kaggle into Google, this website will open for you. You have to click on this. So here on Kaggle, if you want, you can create an account, but you can search anything without creating an account. For example, write "deep fake detection". Whatever work exists on it will all be listed: Deep Fake Detection, Face Detection. All the people who have worked on this will come up, for example "Deep Fake Detection on Images and Videos". Now if you open this, the code is given on it. What will the code that you run do? This model is trained on images and videos. If you give it any image or video, it will tell you whether you have a real image or an AI-generated image, whether it is original or fake. So in this way, whatever datasets we require, we can get them using these three or four websites to train our model, okay? Let's move on to the next slide. Hopefully this is clear to you all now. Now, what is ChatGPT? We use ChatGPT in daily life, but have we ever thought about how the working behind it happens? If you ask it in Urdu, it answers in Urdu. If you ask in English, it will answer in English. It works in different languages. So how does it work?
A basic flow chart is given in the figure on this slide. First we have the transformer. What is a transformer, basically? It is the brain of the AI. What should the output be like for a given input? The transformer always decides this. If you are giving input in English, the transformer will know to reply only in English. If you are giving input in Urdu, the transformer will tell GPT to generate the text in Urdu. Now we have these three things defined: under GPT we have ChatGPT, DALL-E, and Codex. In ChatGPT, we know we give input in text and it gives the output in text. DALL-E is text-to-image: you write a text prompt in DALL-E, "I need an image like this, this thing should be in it," and DALL-E will generate an image and give it to you. What do we do with Codex? Codex is text-to-code. If you require any Python code, you tell it in text: "I require a code which, if I give it this input, should give me this output." For example, you write: give me code for addition, give me subtraction, give me the code for a simple calculator, or I need code for a scientific calculator. So what will Codex do? It will give you complete Python code. So under whose umbrella are all these models coming? What is working inside GPT, on top of GPT? The transformer. I am explaining from the very basics so that things become clear to you. Now, what is GPT? GPT stands for Generative Pre-trained Transformer. Let us break it down and understand. What is "generative"? Why is it called generative? Because it generates: generates text, generates video, generates slides, and generates images. So G stands for generative. P stands for pre-trained. Now why was it called pre-trained? Why not just call it trained? It is said to be pre-trained because the GPT model we have is already trained. What does it do whenever you ask it a question?
It will give you whatever up-to-date information it has. So what is the word "pre-trained" basically showing? Let us understand it this way: whatever model you have, you train it. How do you train it? For example, take the example we just saw of fake images and fake videos. How will it be trained? First of all, we will give it 100 images. From those it will learn what type of eyes should be there: how should the eyes look in a real image? How should the eyes look in a fake image? What shape should the nose have in a real image? What about in a fake image? So what is it doing? It is training. It will train on lips, face, forehead, and hair. So training is going on. And what happens before the final answer? It runs again: it uses all its learned, up-to-date information and gives you the answer. Last is the transformer. What is a transformer? It is the brain, the main part of your GPT model. The transformer itself determines what kind of output you should get. For example, we can understand the transformer this way: what does a transformer do? It doesn't take your words in isolation. It takes the entire line or paragraph you have given and understands it completely. You normally tell it to act as an instructor, act as a student, act as an interviewer, and then you write whatever question you have next. So what will the transformer do? Whatever line or paragraph you have written, it will read all of it. It will match the context of one part with the other to see in which context this conversation is taking place: in which context do I have to answer? So what is a transformer, basically? It is managing what kind of output there should be according to the input you have given. So in this slide we saw that GPT stands for Generative Pre-trained Transformer. What is the "generative" word doing? Generating text, generating audio, generating slides, generating videos. What is P doing? It means a pre-trained model.
The model which is already trained is being used again and is giving you the output result. And what is T doing? The transformer is telling in what context the conversation is taking place: is it acting as an instructor, as a student, or have you given it some scenario? It will analyze the entire scenario and generate an output for you. A flow chart was described in this manner. First you give any input. The first step is pre-processing. Pre-processing is equivalent to cleaning. Whatever text you give, it will be pre-processed first. Pre-processing means it will remove the unnecessary words from it: "is", "the", "a", "an", words which carry no context. It will clean them out. Then the next step will be tokenization, and embedding after tokenization. Let's move on to the next slide; then we'll come back to this one. Let's look at tokenization and embeddings. Now, what happens after a clean input? We know that in a clean input, the useless words you have, like "a" and "the", and stray punctuation are removed, and then it moves toward the next step, tokenization. Tokenization assigns a token to each word of whatever command you type. Here's an example: "To date, the cleverest thinker of all time was ?". Now who has to generate the output in place of the question mark? GPT has to. How does it work? Let's see. Go to Google and search for "text to tokens". A tokenizer will come up. Open it; it is an open platform. You can write any sentence here. For example, we write: "Ali is going to office." Now look at this. It assigned a different token to "Ali", "is", "going", "to", "office", and the full stop. Now your question may be: the full stop also got a token? Why didn't the cleaning remove it? Right now I am just giving an example of tokenization; I did not do the pre-processing or cleaning step, okay?
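To make the idea concrete, here is a minimal sketch of word-level tokenization. This is only a toy illustration, not the actual GPT tokenizer (real models use subword schemes such as byte-pair encoding, so their tokens and IDs look different):

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens (toy example)."""
    return re.findall(r"\w+|[^\w\s]", text)

def build_token_ids(tokens):
    """Assign an integer ID to each distinct token, in order of first appearance."""
    ids = {}
    for tok in tokens:
        if tok not in ids:
            ids[tok] = len(ids)
    return ids

tokens = tokenize("Ali is going to office.")
print(tokens)                    # ['Ali', 'is', 'going', 'to', 'office', '.']
ids = build_token_ids(tokens)
print([ids[t] for t in tokens])  # [0, 1, 2, 3, 4, 5]
```

Notice that the full stop gets its own token here too, just as in the online tokenizer demo: tokenization by itself does no cleaning.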
So understand that for whatever number of words you have, it generated tokens and counted the characters. Now, different tokens can be generated by different models. This tokenizer tokenized "Ali" and "is" as separate tokens; maybe the next model assigns one full token to "Ali is" together, okay? But for your understanding, please see that whatever paragraph or words you write, each piece gets assigned a token. The concept of tokenization should now be clear to you: whatever you write, be it a comma, a full stop, or a word, each one will be assigned a token and a total number of tokens will be counted. Now what is the next step? The first step is pre-processing, that is, cleaning. Next came tokenization. Third comes embedding. Now embedding is an important thing; you have to understand this correctly. What is it? You can see this with "to date, the cleverest thinker": you have a vector for every word. Yes, yes sir. Ma'am, please put it in slide show mode; actually it has become quite small. Okay, sir. Ma'am, there is a slide show button next to the share option. Okay, it's done. The show is being shown to you on screen. Ma'am, when you clicked on the slide show, half the screen had started getting shared; now it is getting shared. There was a little issue. Is it clear now? Yes, yes. Okay. Now in this slide we can see that you have different numeric values. What is embedding? The embedding assigns a vector to your words. If you ask for its definition, it is a numerical representation of words that captures their meaning and relationship with other words. This gives us the definition of embedding: you generate a different vector for each word, and then by relating those vectors to each other, the model works out what type of output there should be.
The link that I just shared with you is of the text tokenizer. If we scroll down in it, there is also an option for vectors, showing how the vector is used. Now if we click on Token IDs, you can see that a numeric value for each word has been generated here. So GPT doesn't know what we're saying to it. What is it doing? Assigning a token to each of your words, and with that token it is computing the embedding vector and generating the output accordingly. Now let's study embedding a little more. It says: vector for King, vector for Man. Words which are similar or close in meaning will have similar vector values. For example, King and Queen will have similar vector values. Man and woman will be similar. Cat and dog will come under the category of animals. Mango and apple will be their own group. If the value of cat and dog is 7 or 6, then mango and apple may be one and two. If King's is four, what will Queen's be? Three or five, something close. So these values tell you what the closest answer should be when you give an input. Now if you ask why one word can have several vector values: if we look closely, there are many English words which have double meanings. For example, "bat". We have the animal bat, and we also have a cricket bat. So you can have different vector values for the same word. Why? So that by relating the numeric value to the context in which the conversation is taking place, the model can know which bat is being talked about. Is it the animal one or the cricket one? If the conversation is about cricket, then the value of that "bat" will be close to the cricket value. If we are talking about animals or birds, then "bat" will come in the bird category. So one English word can have different vector values. So what do we have now? Whenever we give any input to ChatGPT or any model, such vectors are being generated at its back end.
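This "closeness" of vectors is usually measured with cosine similarity. Here is a toy sketch with invented 2-D vectors (real embedding models use hundreds or thousands of dimensions, and the numbers below are made up purely for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 2-D embeddings (invented numbers, just for illustration)
vectors = {
    "king":  [4.0, 9.0],
    "queen": [3.8, 8.5],   # close in meaning, so close in direction
    "apple": [9.0, 1.0],   # different meaning, different direction
}

print(cosine_similarity(vectors["king"], vectors["queen"]))  # close to 1.0
print(cosine_similarity(vectors["king"], vectors["apple"]))  # noticeably smaller
```

The model's answer to "which word is closest?" is essentially "which stored vector has the highest similarity to the input vector?"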
First the tokenization takes place and then the embedding takes place. Through embedding, it calculates what output it has to give, or in what context things are being discussed, and you get a single output generated. You will see a graph on the right side. An example of "leap" and "jump" is given in it. The meanings of both of these are close, so they come in the same category. Now it is possible that on the first axis you get apple and banana; king and queen come on the second; on the third, some animals or birds may come. So the vectors are divided along axes. And what you see is the 2-D case; it can also be 3-D. There can be 4-D, 5-D, any number of dimensions. You will have as many axes as the number of dimensions defined in that model. So it is not necessary that there are only the four values you see because only 2-D is visible. If we want to divide it further, we bring in the Z axis for 3-D, and in the same way we can go to 4-D and 5-D. So it will generate output for us according to the values on which our model is trained. Now, what is embedding giving us? The output you are getting is based on these calculations. You have written this text and pressed enter. Now what answer will GPT give after you press enter? A calculation is applied over the entire vector database it has. Which exact calculation is being applied is not your concern; that is on the GPT model's end. What calculations are or are not being done on the back end is its business. But you should be clear about the concept of embedding and the concept of tokenization. Now you have a final vector giving you probabilities for the output: one candidate word has probability 8.82, "John" has 4.37, the next has 4.04, and so on. So what is it doing with this?
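The step of turning raw scores into those next-word probabilities is typically done with a softmax. A toy sketch, with invented candidate words and invented scores (the real model scores its entire vocabulary of tens of thousands of tokens):

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented candidates for the blank in "the cleverest thinker of all time was ___"
candidates = ["Einstein", "John", "Newton"]
scores = [2.0, 1.3, 1.2]        # made-up raw scores from the model
probs = softmax(scores)

for word, p in zip(candidates, probs):
    print(f"{word}: {p:.2f}")
# The highest-probability candidate is picked as the next token
```

Whatever the raw scores are, after softmax they behave like the percentage-style probabilities shown on the slide.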
It will generate a single word for you based on that specific calculation. The second thing: suppose the output from GPT is generated in the form of, for example, a paragraph. In that case, again, above every word you will have this calculation: a vector is generated relating what the next word should be. If there are 10 words in the paragraph, this calculation will run 10 times. Ten times the vector will be generated again, working out what the next word should be, okay? So this output that we are getting is not so easy; there are very complex calculations happening at the back end. And behind every word there is again a trained model telling what your next word should be. That is why you get an accurate output whenever you ask GPT any question, and if you do not ask a clear question, GPT itself asks whether your question means such-and-such, and then it builds the complete context and shows you an output. Now, we have read that in cleaning inputs, words like "a", "the", "an" get removed. As Miss Hafsa told you, whenever you travel anywhere, you remove the extra luggage so that you do not have to pay. Similarly, in the calculations I told you about, the more extra words there are, the more time-consuming it will be; the operation will be more complex. So we try to keep as few words as possible so that after generating token IDs we can easily get our desired output. Next is stemming. What stemming does is this: take whatever word you write. The "a", "an", "the" have already been removed in the first step. In stemming, if you have "walks", it will remove the "s" and make it "walk". What happened with this? The characters got reduced. The tokens remained the same but the number of characters reduced, so the load on our model is also reduced. What happened to "retrieval"? It became "retriev".
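The "run the calculation once per word" idea is what is meant by autoregressive generation. Here is a toy loop version; the lookup table standing in for the model is entirely invented for illustration (a real model computes probabilities over its whole vocabulary at each step):

```python
# Toy autoregressive generation: pick the most likely next word at each step.
# The "model" here is just a hand-written probability table (an assumption
# for illustration, not anything a real GPT contains).
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def generate(start, steps):
    """Repeatedly append the highest-probability next word."""
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if options is None:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 3))  # the cat sat down
```

Each generated word becomes the input for choosing the next one, which is exactly why a 10-word answer means the calculation runs (at least) 10 times.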
"Running" became "run"; "walking" became "walk". So what is happening in it? We trim the extra endings from words that have the same meaning so that our model can be trained more easily. Well, this slide is saying that stemming doesn't always work. Why doesn't it work? For example, if you have the word "university", then by applying stemming it becomes "univers", so its meaning itself has changed. Or if you have "agreement", stemming reduces it to "agree", so again the meaning changed. So whenever stemming is applied, we have some limitations too. Now how can we remove or correct those limitations? With lemmatization. What happens in this? We use a dictionary. The process that reduces the words consults the dictionary: it will see that the meaning of both "run" and "running" is the same, so it will shorten the word, remove the extra ending, clean it. So if both words have the same meaning in the dictionary, it will reduce the word; otherwise it will leave it as it is. Here an example of a bat is also given. As I told you, "bat" can be the animal and also the cricket bat. So in such a case there are chances that the wrong output may be generated. ChatGPT doesn't always give you the correct output. This might have happened to you also: sometimes GPT might have given you some wrong answers. So it is not necessary that this works 100%. Sometimes you may get results which are not related to the question you have asked; what happens is that the vector values assigned to your text do not match. So what happens then? You request again, or you tell GPT in more detail, "what I requested from you, or what I meant, was this," and then it regenerates and gives you your desired output related to that context. Also, different stemming tools might produce inconsistent results.
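The over-stemming problem versus the dictionary-based fix can be shown in a few lines. Both functions below are toy versions written for this illustration (real systems use something like the Porter stemmer and a WordNet-style lemma dictionary):

```python
def crude_stem(word):
    """Strip common suffixes blindly -- fast, but can mangle meaning."""
    for suffix in ("ity", "ing", "ment", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# A tiny hand-written dictionary of base forms (an assumption for illustration)
LEMMA_DICT = {"running": "run", "walks": "walk", "university": "university"}

def lemmatize(word):
    """Look the word up in a dictionary; leave it alone if it is not there."""
    return LEMMA_DICT.get(word, word)

print(crude_stem("walks"))       # walk      -- fine
print(crude_stem("university"))  # univers   -- meaning damaged
print(crude_stem("agreement"))   # agree     -- meaning changed
print(lemmatize("university"))   # university -- the dictionary keeps it intact
```

This is exactly the trade-off on the slide: blind suffix-stripping is cheap but lossy, while lemmatization pays the cost of a dictionary lookup to preserve meaning.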
What does that mean? It means the tools being used on the back end should be the same if you require accurate output. Different tools will generate different outputs. If you are asking in English, it will generate vectors for English words; if you are asking in Urdu, it will do it for Urdu. If you mix the two, then there are chances that you may get some unrelated output generated. What is the reason? The reason is that it looks at each word to see whether it is an English word or an Urdu word; according to that, it generates its vector and shows you the output. So hopefully it is now clear how GPT generates its output. It cleans the input first. Then tokenization happens: each word is assigned a token. After that, embedding takes place. What does embedding consist of? Calculations on different values which are in the form of vectors. And what output do you get? Whatever is the closest vector value comes to you as the output. And if the output is coming in the form of a paragraph, then this embedding is applied to every word, and by calculating the relationship of the previous vector with the next vector, the next word is generated. And this is the meaning of the embedding values: at this point the value is, say, 6.6 or 0.8, and whatever output value you get, the word corresponding to it is generated. Now the next slide is: What is RAG? We will read about RAG in detail in the next sessions, but since today's slide is an introduction slide, let me give you a little introduction. RAG stands for Retrieval-Augmented Generation. You have to remember this; you may be asked this in interviews, and it can come in the quiz. So remember that RAG stands for Retrieval-Augmented Generation. Now what is this?
It combines retrieval systems with generative AI models to produce accurate and relevant responses. Now, what was there before? When ChatGPT started, the GPT model was only trained on data up to around 2021. Whatever question you asked it, it would reply accordingly; it was not up to date, and changes that had happened since had not yet been learned by the model. But what about today? Today a RAG-style system is implemented in it. The way it works is that for whatever question you ask, it gets up-to-date information by searching websites. So what RAG does for us is give you the most up-to-date information available. Now, how does plain GPT work for us? For example, it has been trained on internet websites; it has memorized all the information in Wikipedia. Now you ask it any question. If it has that data in its records, it looks there and answers you. If not, with retrieval it will read the website again, process it, and bring you up-to-date information. Now how does RAG work? The user generates a query. What happens with the query? There is a document; its chunks are formed; they go to a vector database. Tokenization happens the way we just read, vectors are formed, embedding occurs. And what about generation? You don't get direct output generated straight away. What additional information did you get in this? Augmentation. What does augmented mean? Relevant context. It will search for material related to the query you have generated: whatever relevant references you have, or any updated information, and it will generate the answer accordingly and give it to you. What existed before did not have the augmentation box. So what was in it? It simply tokenized the document, vectorized it, and generated the output. But what has happened with augmentation?
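The retrieve-then-generate flow described above can be sketched with a toy retriever. Real RAG systems rank chunks by embedding similarity in a vector database; here a simple word-overlap score stands in for that, and all the document chunks are invented for illustration:

```python
def score(query, chunk):
    """Count shared words between query and chunk (toy stand-in for vector similarity)."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c)

# Invented document chunks standing in for a vector database
chunks = [
    "the library was updated in march with a new tokenizer",
    "cats and dogs are common household animals",
    "the tokenizer assigns an integer id to each word",
]

def retrieve(query, k=1):
    """Return the k most relevant chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

query = "how does the tokenizer assign an id"
context = retrieve(query)
# In real RAG, this retrieved context is prepended to the prompt ("augmentation")
# before the language model generates the final answer.
print(context[0])
```

The "augmented" step is exactly that middle stage: the retrieved chunk is stuffed into the prompt so the generator answers from relevant, up-to-date context instead of only its frozen training data.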
Now whatever up-to-date information there is, RAG will fetch it by searching through the sources, pass it to the large language model so that it can be easily understood, and generate a response for the user. So this is how RAG works. Okay. With this, our session for today is over, and sir will take the questions and answers. Questions will be answered one by one. So let me just add one thing. Right now you have read a little bit about RAG, and about stemming, lemmatization, and so on. We have read these things just at the basic level. We will have a separate session entirely on RAG in which we will tell you in detail how to create a RAG application, okay? This is an overview of how things go. This much is only for understanding and for the quiz. But how is its practical application made? How is it deployed? How does the work on it happen? We will do all of that later. So let's take questions and answers. Yes, Mohammad Jamal, I unmuted you; you can ask your question. Salam. Walekum Salam. Are the embedding and unembedding matrices always symmetric? Can we share their weights during training so that the size of the model becomes smaller? Yes sir, please answer this question. Okay. Mainly we have this embedding model and unembedding model, and it depends on your model. One case is that you are building your model completely from scratch, which generates the embedding and gives it to you, and its architecture depends on you. At that time, whether you want to tie or reduce the weights is up to you. But if you are using someone's built-in model for the same thing, for example, if you use OpenAI's embedding model, then you will have to use the weights it gives you. Okay? This is one thing; you have to see this clearly. Yes yes, okay. And secondly, let me tell you that embedding takes words into a high-dimensional space.
But how does the unembedding layer extract the probability of thousands of words back from that space? Well, mainly all that magic of the transformer is happening on the back end. We're not going inside its core architecture here, meaning how the probability is being calculated through numerical computation; all that statistical work goes on at the back end. It is the subject of maths, and it goes toward core maths only. We are not going toward that; right now we are just understanding at the basic level. Okay. And I have another question. Yes yes, okay. There is one more question after that. Sir, I have heard that there is a semantic gap between the embedding and unembedding matrices. So, during training, does the model itself bridge this gap, or do we have to use some special technique? That's what I'm talking about. Okay, ma'am, if you search here for the "Attention Is All You Need" paper: how does this architecture work? We have the complete transformer mechanism, which you talked about, how the context is understood; all those things are defined in it. I think your screen is stuck. Okay, I will share it; let me share the screen. Tell me, is my screen being shared? Is my voice clear? Sir, the voice is clear; your screen is not sharing. I will do it. Okay. Ah, this is mainly the paper that is the base paper for the complete response process of ChatGPT, or whatever LLM work you have going on. Whatever concepts we have in this: for example, sometimes you even use wrong spellings, but still the model understands. How does it do it? All of this is numerical calculation in the back end. This architecture that you are seeing works on its back end. If you are from a computer science background, then it is mandatory to read this paper; you will get a clear idea of how it happens. Okay. Yes yes, okay. Thank you so much sir. Okay. Hafiz Mohammad Anwar.
Assalamu Alaikum wa Rahmatullahi wa Barakatuhu. Greetings to you too. Yes sir. Mashallah, these are good lectures. My own field is computer science; data science is related to these things, and I am also an associate professor. I have taken the course only for my own improvement, for betterment. So, some of the questions that students have arise naturally because this back end is not their background, due to which stemming, lemmatization, and all these concepts are not clear. Well, my question here was just about these courses, because yesterday I had raised my hand but my question could not be asked. You should also organize such workshops and courses for teachers, because if the teachers have command over these courses, these subjects, these ideas and this knowledge, then the children who are studying in universities can be trained better from there, and when the concepts of the teachers themselves are not clear, then naturally these poor children take degrees from the university but do not acquire skills, and due to lack of skills they become jobless or face problems. So my only point here is: you are getting these courses done, and especially in these BS programs which have started in colleges, some learning still takes place while staying in the university. I am also a PhD scholar, so that is some learning. But when we talk about the college level where BS programs have started, I see that there is a need for a lot of training of teachers. So, if you kindly pay attention to this aspect, it will be very beneficial for Pakistan, for the teachers of Pakistan, and also for the students. Sir, thank you so much for adding this point. But I want to tell you one addition: I myself am a teacher, and recently Aspire Pakistan has initiated a workshop in our university; the link has been generated recently.
So, this work is being done for teachers as well as for students, and these workshops, seminars, and trainings are open: teachers can attend, students can attend. Even people who are not from a CS background, for example from pharmaceutical companies, can also join. But this has been initiated for teachers as well; it has started in different universities. I was saying the same thing: PhDs have been done in universities for a long time; PhD doctors are present there, and different subjects are available for the students who are doing BS in colleges. Especially in Punjab, you know that BS programs have been started in almost 60% of the colleges. But while the BS programs have been started, their teachers have not been trained. They have no training. The teachers do not even have proper knowledge of assignments, presentations, and credit hours. So when teachers themselves do not have the right knowledge, they will face problems in communicating with the children, and there will also be problems in transferring knowledge to the children. Well, when I saw the link, only students were considered on it; because I was a PhD scholar, I said I am also a student, so I applied. So if you launch this for college-level teachers also, then Insha Allah it will be beneficial for them and, simply, it will be beneficial for our nation. Thank you. Yes sir, this offer is open for everyone. In the last cohort, some students who were with us from 8th grade tried to create the applications, and beyond the basic level they were able to create all the applications. Meaning they had grasped the overall idea, even though they did not have very deep knowledge of how the work was going on internally. But they were able to create the applications, and they were from 8th grade, actually. So let's take the next question. Zeeshan Ali, okay, Syed Sahil, Salam Walekum.
Greetings to you too. Hello, Assalamu Alaikum. Sir Zeeshan Ali, please complete the question first. Sir, I will ask you a question: if we have to put a company's private data into a chatbot, will it be better to use fine-tuning or RAG, and why? Well, before I answer this, you should ask this question to AI once, and after that, ask me again. Instead of me saying it first, you search first. What do you say? Thank you sir. Otherwise, you will get the answer almost before the RAG session anyway. Syed Sahil Ali ji, I have unmuted you. Sahil, you can ask questions. Assalamu Alaikum Sir. Well, I am currently a Software Engineering student, so we ourselves are using APIs, like Gemini, ChatGPT and flight APIs, which we take from other places. So, since nowadays it is the era of AI and everything is possible, is it not possible for us to create the same API ourselves and use it? Well, everything that is there, you could build yourself in the end. Ok? Meaning, let's assume a complete service is already running; it is made, and it is more stabilized than yours. Then you are just increasing the investment you have to make. Ok? Is that thing okay? If you think from this perspective, your dependency is ending. Meaning, in future, let's suppose they get shut down, then your work will continue because you have deployed it yourself. Ok? That is one perspective. But if they are giving you a standard, complete one in which they are saying that everything will remain open, then in that case it is much better to just use it. Ok? Now it is like this that we use SPM, we follow MDA or model architecture. So this ready-made app which we already have is available for payment, because when we buy the ChatGPT API, first of all we get a month free at the beginning, with some request limits in that too.
So when we have to buy that paid plan every month, then to avoid that payment, if we build it for ourselves, it is better in my opinion. What do you say? Well, we will show you the same things that you were using there, for free. Ok? Meaning, we will use the same concept, the same thing, for free. Ok? Correct. Ok? Right now we use Rapid API, which is an application through which we take the data APIs of multiple flight destinations etc., but the same is available free there also, though that too is free only for one month. After that, will it be totally free? It will be free. Thank you sir. Ok, Ma'am Noor Fatima, Assalamu Alaikum. Walekum Salam. I understood everything in the lecture so I have no questions. Thank you so much. Mohammad Zain Farid. Zain Farid: Sir, my question is that in RAG, if two documents have similar vectors but different meanings, how does the model decide which one to use? Okay, that is the question where you have to trade off. It depends on you whether you want a completely perfect response or you want a response that is a little passable, even with 90% accuracy. Okay, then you have to trade off on the latency. So these are some of the things. Now if you want a very good response, let's say, then ChatGPT etc. give quite good responses. Ok? Gemini also gives them to you. Ok? But if you say no, I can manage by doing some work and fine-tuning, then Llama gives it to you, and that too free of cost. Ok? So after looking at these few things, we have to take an overall decision as to what our actual requirement is. At the end: Sir, I had another question, whether we will have any special class regarding Git and GitHub, or are we leaving that aside? We will not specifically go towards GitHub; we will look at its alternative which we have just talked about today, Hugging Face. Actually it also works in a similar way. Sir, so from there we can use open source models.
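The question above about two documents having similar vectors but different meanings can be made concrete with a tiny sketch. The vectors and numbers below are invented toy values, not from any real embedding model; they just illustrate why two differently-meaning documents can score almost identically, which is why a RAG pipeline often needs a re-ranking step or a better embedding model on top of raw similarity.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real models use hundreds of dimensions).
doc_bank_river = [0.90, 0.10, 0.20]  # "bank" as in river bank
doc_bank_money = [0.88, 0.12, 0.19]  # "bank" as in a financial bank

query = [0.90, 0.10, 0.20]

# Both documents score above 0.99 against the query, so similarity alone
# cannot tell them apart -- that is the trade-off discussed above.
print(cosine_similarity(query, doc_bank_river))
print(cosine_similarity(query, doc_bank_money))
```

In practice this is exactly where the accuracy-versus-latency trade-off bites: retrieving more candidates and re-ranking them gives better answers but a slower response.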
Like sir, everyone's main issue is about API keys, because we have to pay for them. So if we tell everyone this process of how to get a free API key, like from GitHub or Hugging Face, then hopefully everyone's headache will be solved. Yes, there is a time for everything; each topic has a specific lecture. Ok? This is our main session coverage for this week. Now we are not reading anything new in the next few days; we will practice these things again. Next weekend will be the main session again, ok, with new material in it, so each session has a specific time. So we will do this, Insha Allah. Thank you sir. Thank you. Ok. Hafsa Khan. We have the last 5 to 6 minutes, as the session is scheduled till 10 PM. Hafsa Khan? Yash Mohammad? Yash ji, if you are speaking then the sound is not coming through. Ok sir. Ok, Tayyab Sheikh ji, Assalamu Alaikum. Greetings to you too. Yes, sorry, I did not get a chance to ask the question earlier. Can you explain this a little bit again? I did not understand this concept of the service. Well, it is simple. What does LLM mean? Large Language Model. Large means very big; languages means English, Urdu, Hindi, Persian, whatever languages we have, of every type. Now what is a model? A model is a logic, a concept. Now why is it called that? Because we have such a technology, such a system, which has absorbed knowledge from all over the world, okay? Meaning there are thousands of books inside it. There are millions of books lying all over the world, and the content you see on different websites, it has memorized all of that. Meaning it has understood it. That is the reason it is called a Large Language Model: it covers different languages, it is huge, and it is a model because it is logic, a concept. Yes. Ok?
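Since "accessing an LLM through an API" with a key comes up repeatedly here, a minimal sketch may help. This only builds the request instead of sending it, so it runs without a network; the endpoint URL, model name, and token are placeholder assumptions (not values given in the session), and the real URL and payload shape should be checked against the Hugging Face Inference API documentation.

```python
import json

# Hypothetical values -- check the Hugging Face docs for the current endpoint.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
API_KEY = "hf_xxx_your_token_here"  # placeholder, never commit a real key

def build_request(prompt):
    # An API call is just a structured message to someone else's server:
    # a URL, an Authorization header carrying your key, and a JSON body.
    headers = {"Authorization": f"Bearer {API_KEY}"}
    body = json.dumps({"inputs": prompt})
    return API_URL, headers, body

url, headers, body = build_request("What is a Large Language Model?")
print(url)
print(json.loads(body)["inputs"])
```

The key in the header is what identifies (and bills, on paid plans) the caller; everything else is ordinary HTTP, which is why switching from a paid API to a free one is mostly a matter of changing the URL and the key.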
So basically, as written in this "accessing LLM through API": I understood the concept of API, but does LLM then mean approaching it in different languages, or something else? That is, you contact the service through the API, so does it mean that through the API you tell it that I want the answer in this language? No; the "languages" part is just the definition of the model. It is not necessary that we are contacting it only about language. Actually, whenever we want to make contact with any other system, we create and use an API. Okay, all right. Thank you sir. Ok. So let's take the next question. We have the last two minutes. Jeera ji Ali? Hello, Assalamu Alaikum ji. Sir, my question is that we see this word in many places: Gateway, API Gateway. What is the meaning of this gateway which is used everywhere? Now, first, is the API clear to you? Yes sir, absolutely. Now the gateway: do you know what is meant by gate? A gate is a door; that means there must be some way through it. Yes. Ok. Now, through that path we can access multiple different services. Ok. That gateway decides which of the different services you are accessing with one API key, with the help of a single gateway. Ok? That means you can say that this is the main door of our house: from the main door we are able to access multiple rooms, and every single room is one of our services. Ok. We have a server room. Clear? Ok. Ok. Thank you sir. Ok. So, Inshallah, we will meet again tomorrow. Actually our time is up; our session will auto-end. So we will practice again tomorrow, and our main agenda for tomorrow is that we will see examples and applications of AI models, then we will see what the parameters of AI are, how it sees the input, and how we can measure performance requirements from them.
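The "main door of the house" analogy for an API gateway can be sketched in a few lines. The service names and key below are made up for illustration; the point is only the shape of the idea: the key is checked once at the door, and the path decides which "room" (service) handles the request.

```python
# Toy API gateway: one front door, one key check, many services behind it.
SERVICES = {
    "/weather": lambda: "sunny",
    "/flights": lambda: "PK-301 on time",
    "/llm":     lambda: "model response",
}

VALID_KEYS = {"demo-key-123"}  # hypothetical key for the sketch

def gateway(path, api_key):
    # The gateway checks the key once at the main door...
    if api_key not in VALID_KEYS:
        return "401 Unauthorized"
    # ...then routes the request to whichever room (service) was asked for.
    service = SERVICES.get(path)
    if service is None:
        return "404 Not Found"
    return service()

print(gateway("/flights", "demo-key-123"))
print(gateway("/flights", "wrong-key"))
```

This is why one API key can give access to many services: authentication and routing live in the gateway, not in each individual service.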
So, inshallah, see you tomorrow. Thank you very much for joining.
HEC Generative AI Training Program | C3 | Week 1 | Main Session 1 | Monday | Part-2 Topic: Introduction to Generative AI