<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Deviloper's Blog]]></title><description><![CDATA[Technical Blog related to Tips & Tricks, Tutorials, programming languages Like Python, javascript, etc. 
#pythondeveloper #datastructure #appdeveloper #javaprogramming #developerlife]]></description><link>https://deviloper.in</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1622196267540/IUAetUC_F.jpeg</url><title>Deviloper&apos;s Blog</title><link>https://deviloper.in</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 00:44:15 GMT</lastBuildDate><atom:link href="https://deviloper.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Understanding Large Language Models: What You Need to Know]]></title><description><![CDATA[Post #2 in the Complete Prompt Engineering Series
Welcome back! In What is Prompt Engineering? A Complete Introduction, you learned what prompt engineering is and why it matters. Now we're going deeper: understanding the engine under the hood. You do...]]></description><link>https://deviloper.in/understanding-large-language-models</link><guid isPermaLink="true">https://deviloper.in/understanding-large-language-models</guid><category><![CDATA[llm]]></category><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[#PromptEngineering]]></category><category><![CDATA[Machine Learning]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Fri, 19 Dec 2025 13:32:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766150909743/5147a974-01aa-40f8-97ba-d04b15458b74.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>Post #2 in the Complete Prompt Engineering Series</em></p>
<p><em>Welcome back! In</em> <a target="_blank" href="https://deviloper.in/what-is-prompt-engineering-complete-guide"><strong>What is Prompt Engineering? A Complete Introduction</strong></a><em>,</em> <em>you learned what prompt engineering is and why it matters. Now we're going deeper: understanding the engine under the hood. You don't need to become a machine learning engineer, but understanding how LLMs work will transform how you prompt them.</em></p>
<h2 id="heading-why-understanding-llms-makes-you-a-better-prompt-engineer"><strong>Why Understanding LLMs Makes You a Better Prompt Engineer</strong></h2>
<p>Here's a question: Would you be a better driver if you understood how an engine works? Maybe, maybe not—but you'd definitely be better at diagnosing problems and maximizing performance.</p>
<p><strong>The same applies to prompt engineering.</strong></p>
<p>When you understand:</p>
<ul>
<li><p>Why <strong>temperature</strong> settings change output personality</p>
</li>
<li><p>How <strong>tokens</strong> affect your costs and results</p>
</li>
<li><p>Why <strong>context windows</strong> are your biggest constraint</p>
</li>
<li><p>What makes <strong>different models</strong> excel at different tasks</p>
</li>
</ul>
<p>...you stop guessing and start engineering.</p>
<p><strong>This post will give you that X-ray vision.</strong> By the end, when an AI gives you an unexpected response, you'll know exactly why—and how to fix it.</p>
<h2 id="heading-part-1-how-large-language-models-actually-work-the-simplified-truth"><strong>Part 1: How Large Language Models Actually Work (The Simplified Truth)</strong></h2>
<h3 id="heading-the-core-concept-statistical-prediction-at-scale"><strong>The Core Concept: Statistical Prediction at Scale</strong></h3>
<p>Let's start with a truth that changes everything:</p>
<p><strong>Large Language Models don't "understand" language. They predict it.</strong></p>
<p>Imagine you're playing a game where I say: <em>"The cat sat on the..."</em></p>
<p>Your brain instantly suggests: <em>mat, chair, windowsill, table</em>—all reasonable completions. That's essentially what an LLM does, except:</p>
<ol>
<li><p>It analyzes <strong>trillions</strong> of word patterns from its training data</p>
</li>
<li><p>It calculates the <strong>probability</strong> of each possible next word</p>
</li>
<li><p>It selects based on those probabilities (modified by parameters we'll discuss)</p>
</li>
<li><p>It repeats this process <strong>word by word</strong> until the response is complete</p>
</li>
</ol>
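<p>The predict-sample-repeat loop above can be sketched in a few lines of Python. This is a toy illustration only—the "model" here is a hand-made probability table, not a trained network—but the mechanic is the same one an LLM uses:</p>
<pre><code class="lang-python">import random

# Toy "model": for each context word, an invented distribution over next words.
NEXT_WORD = {
    "the": {"cat": 0.5, "sky": 0.3, "mat": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"on": 0.9, "quietly": 0.1},
    "on": {"the": 1.0},
}

def generate(start, steps, rng=None):
    """Repeatedly sample the next word from the distribution, LLM-style."""
    rng = rng or random.Random(0)
    words = [start]
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:  # no known continuation: stop, like an end-of-text token
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 4))
</code></pre>
<p>A real model does exactly this, except the distribution over the next token is computed by a neural network from the entire context, not looked up in a table.</p>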
<p><strong>Key insight:</strong> Every word you see from an AI is a prediction based on:</p>
<ul>
<li><p>The patterns it learned during training</p>
</li>
<li><p>Your prompt (the context you provided)</p>
</li>
<li><p>The parameters you've set (temperature, etc.)</p>
</li>
</ul>
<h3 id="heading-the-transformer-architecture-the-breakthrough-that-changed-everything"><strong>The Transformer Architecture: The Breakthrough That Changed Everything</strong></h3>
<p>In 2017, Google researchers published a paper called <strong>"Attention is All You Need"</strong> that introduced the Transformer architecture. This was the earthquake that created the AI revolution we're experiencing.</p>
<p><strong>What made it revolutionary?</strong></p>
<h4 id="heading-before-transformers-sequential-processing"><strong>Before Transformers: Sequential Processing</strong></h4>
<p>Older models (RNNs, LSTMs) had to process text sequentially—one word at a time, left to right. This meant:</p>
<ul>
<li><p>❌ Slow processing</p>
</li>
<li><p>❌ Limited context understanding</p>
</li>
<li><p>❌ Difficulty with long-range dependencies</p>
</li>
<li><p>❌ Couldn't parallelize (use multiple processors simultaneously)</p>
</li>
</ul>
<p><strong>Example problem:</strong><br />In the sentence <em>"The chef who trained in France and Italy and worked at multiple Michelin-starred restaurants opened</em> <strong><em>his</em></strong> <em>new restaurant,"</em> the word "his" refers back to "chef"—but there are 15 words in between. Old models struggled with these connections.</p>
<h4 id="heading-the-transformer-revolution-attention-mechanism"><strong>The Transformer Revolution: Attention Mechanism</strong></h4>
<p>Transformers introduced <strong>self-attention</strong>, which allows the model to:</p>
<ul>
<li><p>✅ Look at <strong>all words simultaneously</strong></p>
</li>
<li><p>✅ Understand <strong>relationships between any words</strong> in the input</p>
</li>
<li><p>✅ Process in <strong>parallel</strong> (much faster)</p>
</li>
<li><p>✅ Handle <strong>long-range dependencies</strong> effectively</p>
</li>
</ul>
<p><strong>The "Attention" mechanism</strong> answers: "When processing this word, which other words in the sequence should I pay attention to?"</p>
<p><strong>Visual analogy:</strong></p>
<ul>
<li><p><strong>Old way:</strong> Reading a book by looking at one word at a time through a tiny hole</p>
</li>
<li><p><strong>Transformer way:</strong> Seeing the entire page at once and understanding how every word relates to every other word</p>
</li>
</ul>
<h3 id="heading-the-three-components-of-a-transformer"><strong>The Three Components of a Transformer</strong></h3>
<p><strong>1. Input Embeddings</strong></p>
<ul>
<li><p>Your text is converted into numerical vectors (we'll cover this in tokenization)</p>
</li>
<li><p>These vectors capture semantic meaning: <em>"king" - "man" + "woman" ≈ "queen"</em></p>
</li>
</ul>
<p><strong>2. Encoder-Decoder Architecture (or Decoder-Only)</strong></p>
<ul>
<li><p><strong>Encoder:</strong> Processes the input and creates a rich representation</p>
</li>
<li><p><strong>Decoder:</strong> Generates the output based on that representation</p>
</li>
<li><p><strong>Modern LLMs</strong> (GPT series, LLaMA) use <strong>decoder-only</strong> architecture for generation tasks</p>
</li>
</ul>
<p><strong>3. Attention Layers (The Magic)</strong></p>
<ul>
<li><p>Multiple layers of attention mechanisms</p>
</li>
<li><p>Each layer learns different patterns: grammar, facts, reasoning, style, etc.</p>
</li>
<li><p><strong>GPT-4</strong> reportedly has 120+ layers; <strong>Claude 3.5</strong>'s internals are undisclosed but likely of similar depth</p>
</li>
</ul>
<h3 id="heading-training-how-models-learn"><strong>Training: How Models Learn</strong></h3>
<p><strong>Phase 1: Pre-training (The Foundation)</strong></p>
<p>The model reads massive amounts of text from the internet and learns to predict the next word.</p>
<p><strong>Scale we're talking about:</strong></p>
<ul>
<li><p><strong>GPT-3:</strong> Trained on ~45TB of text (300 billion tokens)</p>
</li>
<li><p><strong>GPT-4:</strong> Estimated 10-20 trillion tokens</p>
</li>
<li><p><strong>LLaMA 2:</strong> 2 trillion tokens</p>
</li>
<li><p><strong>Claude 3:</strong> Estimated similar or greater scale</p>
</li>
</ul>
<p><strong>What it learns:</strong></p>
<ul>
<li><p>Language patterns and grammar</p>
</li>
<li><p>Factual knowledge (though imperfectly)</p>
</li>
<li><p>Reasoning patterns</p>
</li>
<li><p>Common sense associations</p>
</li>
<li><p>Cultural and domain knowledge</p>
</li>
<li><p>Unfortunately, also biases present in training data</p>
</li>
</ul>
<p><strong>Training objective:</strong> Given this sequence of words, predict the next one. Repeat billions of times across trillions of words.</p>
<p><strong>Phase 2: Fine-Tuning (Specialization)</strong></p>
<p>After pre-training, models undergo additional training:</p>
<p><strong>a) Supervised Fine-Tuning (SFT)</strong></p>
<ul>
<li><p>Human-written examples of good responses</p>
</li>
<li><p>Teaches the model to follow instructions</p>
</li>
<li><p><em>"When someone asks X, respond like Y"</em></p>
</li>
</ul>
<p><strong>b) Reinforcement Learning from Human Feedback (RLHF)</strong></p>
<ul>
<li><p>Humans rank multiple model outputs</p>
</li>
<li><p>Model learns which responses humans prefer</p>
</li>
<li><p>This is why <strong>ChatGPT</strong> and <strong>Claude</strong> are so much better at conversation than raw GPT-3</p>
</li>
</ul>
<p><strong>The result:</strong> A model that can follow complex instructions, maintain context, and generate human-like text.</p>
<h3 id="heading-parameters-the-models-memory"><strong>Parameters: The Model's "Memory"</strong></h3>
<p>You've probably heard models described by their parameter count: "GPT-4 has 1.7 trillion parameters!"</p>
<p><strong>What are parameters?</strong><br />Think of them as the model's "knowledge weights"—the numerical values that determine how the model processes and generates text.</p>
<p><strong>Model sizes:</strong></p>
<ul>
<li><p><strong>GPT-3:</strong> 175 billion parameters</p>
</li>
<li><p><strong>GPT-4:</strong> ~1.7 trillion parameters (unconfirmed; rumored to be a mixture of 8 experts)</p>
</li>
<li><p><strong>Claude 3 Opus:</strong> Estimated ~500B-1T parameters</p>
</li>
<li><p><strong>LLaMA 2 70B:</strong> 70 billion parameters</p>
</li>
<li><p><strong>Gemini Ultra:</strong> Estimated 1.5T+ parameters</p>
</li>
</ul>
<p><strong>Bigger = better? Usually, but...</strong></p>
<ul>
<li><p>✅ More parameters = more knowledge capacity</p>
</li>
<li><p>✅ Better reasoning and nuance</p>
</li>
<li><p>❌ Much more expensive to run</p>
</li>
<li><p>❌ Slower response times</p>
</li>
<li><p>❌ Not always necessary for simpler tasks</p>
</li>
</ul>
<p><strong>Practical implication:</strong> A 70B open-source model might be perfect for your use case, saving you 90% in costs compared to GPT-4.</p>
<h3 id="heading-why-this-matters-for-prompting"><strong>Why This Matters for Prompting</strong></h3>
<p>Understanding this architecture explains:</p>
<p><strong>1. Why context order matters</strong><br />Generation proceeds token by token (even though attention looks at the whole prompt in parallel), and models show positional biases: instructions near the end of a prompt often carry extra weight.</p>
<p><strong>Practical tip:</strong> Put the most important instructions near the end of your prompt.</p>
<p><strong>2. Why the model can be confident yet wrong</strong><br />It's predicting based on patterns, not retrieving facts from a database. High probability ≠ factually correct.</p>
<p><strong>Practical tip:</strong> Request citations, use chain-of-thought prompting, verify critical information.</p>
<p><strong>3. Why examples are so powerful</strong><br />Examples directly influence the probability distribution for the next tokens.</p>
<p><strong>Practical tip:</strong> Few-shot prompting works because you're literally showing the model the pattern you want.</p>
<p><strong>4. Why some tasks are harder than others</strong><br />Complex reasoning requires the model to maintain and manipulate abstract representations across many steps.</p>
<p><strong>Practical tip:</strong> Break complex tasks into smaller steps (prompt chaining).</p>
<h2 id="heading-part-2-tokenizationthe-hidden-language-of-ai"><strong>Part 2: Tokenization—The Hidden Language of AI</strong></h2>
<p>Here's something that will change how you write prompts forever: <strong>AI doesn't see words. It sees tokens.</strong></p>
<h3 id="heading-what-are-tokens"><strong>What Are Tokens?</strong></h3>
<p><strong>Tokens are the basic units of text that models process.</strong> They're not exactly words, not exactly characters—they're subword units.</p>
<p><strong>The rough rule:</strong> 1 token ≈ 4 characters or ≈ 0.75 words in English</p>
<p><strong>Examples:</strong></p>
<pre><code class="lang-plaintext">"Hello world!" = 3 tokens ["Hello", " world", "!"]
"artificial intelligence" = 3 tokens ["art", "ificial", " intelligence"]
"ChatGPT" = 2 tokens ["Chat", "GPT"]
"antidisestablishmentarianism" = 6 tokens ["ant", "idis", "establish", "ment", "arian", "ism"]
</code></pre>
<p><strong>Why not just use words?</strong></p>
<ul>
<li><p>Many languages don't have clear word boundaries (Chinese, Japanese)</p>
</li>
<li><p>Tokenization handles new/rare words better</p>
</li>
<li><p>More efficient processing</p>
</li>
<li><p>Handles numbers, punctuation, code, etc.</p>
</li>
</ul>
<h3 id="heading-how-tokenization-works"><strong>How Tokenization Works</strong></h3>
<p><strong>Step 1: Byte-Pair Encoding (BPE)</strong><br />The most common approach. The algorithm:</p>
<ol>
<li><p>Starts with individual characters</p>
</li>
<li><p>Finds the most frequently occurring pairs</p>
</li>
<li><p>Merges them into single tokens</p>
</li>
<li><p>Repeats until reaching the desired vocabulary size</p>
</li>
</ol>
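<p>To make BPE concrete, here's a minimal sketch of the merge loop in Python. It's a simplification—real tokenizers also learn a fixed merge table and handle raw bytes—but it shows how frequent fragments collapse into single tokens:</p>
<pre><code class="lang-python">from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(tokens, pair):
    """Replace each occurrence of `pair` with a single merged token."""
    merged, skip = [], False
    for first, second in zip(tokens, tokens[1:] + [""]):
        if skip:                       # second half of a pair already merged
            skip = False
        elif (first, second) == pair:
            merged.append(first + second)
            skip = True
        else:
            merged.append(first)
    return merged

# Start from individual characters and apply a few merges:
# frequent fragments like "lo" and "low" quickly become single tokens.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
</code></pre>
<p>After a few merges, the common stem appears as one token—which is exactly why common words cost 1 token while rare words stay split into several.</p>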
<p><strong>Result:</strong> Common words = 1 token, rare words = multiple tokens</p>
<p><strong>Vocabulary sizes:</strong></p>
<ul>
<li><p><strong>GPT-3:</strong> ~50,257 tokens (GPT-4's tokenizer uses a larger ~100K vocabulary)</p>
</li>
<li><p><strong>Claude:</strong> ~100,000 tokens</p>
</li>
<li><p><strong>LLaMA:</strong> ~32,000 tokens</p>
</li>
</ul>
<h3 id="heading-why-tokenization-matters-for-you"><strong>Why Tokenization Matters for You</strong></h3>
<p><strong>1. Cost Calculation</strong></p>
<p>API pricing is <strong>per token</strong>, not per word:</p>
<ul>
<li><p>OpenAI GPT-4: $0.03/1K input tokens, $0.06/1K output tokens</p>
</li>
<li><p>Claude Opus: $0.015/1K input tokens, $0.075/1K output tokens</p>
</li>
</ul>
<p><strong>Your 100-word prompt isn't 100 tokens—it's more like 130-150 tokens.</strong></p>
<p><strong>Pro tip:</strong> Use tokenization tools to check:</p>
<ul>
<li><p><a target="_blank" href="https://platform.openai.com/tokenizer">OpenAI Tokenizer</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/openai/tiktoken">TikToken (Python library)</a></p>
</li>
</ul>
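<p>Since pricing is per token, you can ballpark costs with the rough 4-characters-per-token rule from earlier. A quick sketch—the default prices are the GPT-4 figures quoted above and will drift over time, so treat them as placeholders:</p>
<pre><code class="lang-python">def estimate_tokens(text):
    """Rough token estimate: ~4 characters per token for English prose."""
    return max(1, round(len(text) / 4))

def estimate_cost(prompt, expected_output_tokens,
                  input_price_per_1k=0.03, output_price_per_1k=0.06):
    """Ballpark cost of one API call (defaults: the GPT-4 prices above)."""
    input_tokens = estimate_tokens(prompt)
    input_cost = (input_tokens / 1000) * input_price_per_1k
    output_cost = (expected_output_tokens / 1000) * output_price_per_1k
    return input_cost + output_cost

cost = estimate_cost("Summarize this report..." * 50, expected_output_tokens=500)
</code></pre>
<p>For exact counts, use the tokenizer tools linked above rather than the heuristic.</p>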
<p><strong>2. Context Window Limits</strong></p>
<p>Models have <strong>token limits</strong>, not word limits:</p>
<ul>
<li><p>GPT-4: 8K, 32K, or 128K tokens</p>
</li>
<li><p>Claude 3.5: 200K tokens</p>
</li>
<li><p>Gemini 1.5 Pro: 1M tokens</p>
</li>
</ul>
<p><strong>Common mistake:</strong> "Claude's 200K window means I can fit 200,000 words!"<br /><strong>Reality:</strong> 200K tokens ≈ 150,000 words—and the response has to fit within that same budget.</p>
<p><strong>3. Token Efficiency</strong></p>
<p>Some ways of writing use fewer tokens:</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-plaintext">❌ Inefficient (many tokens):
"The individual who is responsible for the management and oversight of..."

✅ Efficient (fewer tokens):
"The manager of..."
</code></pre>
<p><strong>But don't obsess over this.</strong> Clarity trumps token optimization except at massive scale.</p>
<p><strong>4. Why Some Words Are "Harder" Than Others</strong></p>
<p>Words that tokenize into more pieces are harder for the model:</p>
<p><strong>Easy (1 token):</strong> "the", "and", "computer", "science"<br /><strong>Harder (multiple tokens):</strong> "antidisestablishmentarianism", rare names, specialized jargon</p>
<p><strong>This is why:</strong></p>
<ul>
<li><p>Models sometimes misspell unusual names</p>
</li>
<li><p>They struggle with very rare technical terms</p>
</li>
<li><p>They're better with common vocabulary</p>
</li>
</ul>
<p><strong>5. Numbers and Code</strong></p>
<p>Numbers tokenize unpredictably:</p>
<pre><code class="lang-plaintext">"1234" might be ["123", "4"] or ["12", "34"] or ["1", "2", "3", "4"]
</code></pre>
<p><strong>This is why LLMs are bad at arithmetic</strong>—they're predicting token sequences, not calculating.</p>
<p><strong>Code tokenizes differently than prose:</strong></p>
<pre><code class="lang-python">def calculate_total(items):
    return sum(items)
</code></pre>
<p>This might be 15-20 tokens, depending on how the tokenizer handles code syntax.</p>
<h3 id="heading-practical-tokenization-tips-for-prompting"><strong>Practical Tokenization Tips for Prompting</strong></h3>
<p><strong>1. Check your token count before submission</strong><br />Especially for long prompts approaching context limits.</p>
<p><strong>2. Be aware of token-heavy formats</strong></p>
<ul>
<li><p>JSON (lots of special characters)</p>
</li>
<li><p>Tables (formatting characters)</p>
</li>
<li><p>Code (syntax elements)</p>
</li>
</ul>
<p><strong>3. Don't pad unnecessarily</strong><br />This doesn't help and wastes tokens: "Please, if you could, maybe, possibly..."</p>
<p><strong>4. Use the model's native token count in API calls</strong><br />Most APIs return <code>usage.total_tokens</code>—monitor this for cost tracking.</p>
<h2 id="heading-part-3-context-windows-and-their-limitations"><strong>Part 3: Context Windows and Their Limitations</strong></h2>
<p><strong>The context window is the total amount of text (in tokens) a model can consider at once</strong>—including both your prompt and its response.</p>
<p>Think of it as the model's "working memory."</p>
<h3 id="heading-current-context-window-sizes"><strong>Current Context Window Sizes</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Model</th><th>Context Window</th><th>Practical Capacity</th></tr>
</thead>
<tbody>
<tr>
<td>GPT-3.5 Turbo</td><td>16K tokens</td><td>~12,000 words</td></tr>
<tr>
<td>GPT-4</td><td>8K tokens</td><td>~6,000 words</td></tr>
<tr>
<td>GPT-4 (extended)</td><td>32K tokens</td><td>~24,000 words</td></tr>
<tr>
<td>GPT-4 Turbo</td><td>128K tokens</td><td>~96,000 words</td></tr>
<tr>
<td>Claude 3.5 Sonnet</td><td>200K tokens</td><td>~150,000 words</td></tr>
<tr>
<td>Claude 3 Opus</td><td>200K tokens</td><td>~150,000 words</td></tr>
<tr>
<td>Gemini 1.5 Pro</td><td>1M tokens</td><td>~750,000 words</td></tr>
<tr>
<td>LLaMA 3</td><td>8K-32K tokens</td><td>~6,000-24,000 words</td></tr>
</tbody>
</table>
</div><p><strong>The race is on:</strong> Companies are competing to offer larger context windows.</p>
<h3 id="heading-why-context-windows-matter"><strong>Why Context Windows Matter</strong></h3>
<p><strong>1. They Define What You Can Input</strong></p>
<p>Want to analyze an entire book? You need:</p>
<ul>
<li><p><em>"The Great Gatsby"</em> = ~47,000 words = ~63,000 tokens</p>
</li>
<li><p><strong>Required:</strong> 128K+ context window</p>
</li>
</ul>
<p><strong>2. They Include the Response</strong></p>
<p>If you have 8K tokens total:</p>
<ul>
<li><p>Your prompt uses 6K tokens</p>
</li>
<li><p>Only 2K tokens remain for the response</p>
</li>
<li><p>That's ~1,500 words maximum output</p>
</li>
</ul>
<p><strong>Common issue:</strong> Long prompts with insufficient room for responses.</p>
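<p>The arithmetic is simple but easy to forget, so a small helper—a sketch using the ~0.75 words-per-token rule from Part 2—can catch budget problems before you hit the API:</p>
<pre><code class="lang-python">def response_budget(context_window, prompt_tokens):
    """Tokens left for the model's reply once the prompt is counted."""
    return max(0, context_window - prompt_tokens)

def budget_in_words(context_window, prompt_tokens, words_per_token=0.75):
    """Approximate maximum reply length in English words."""
    return int(response_budget(context_window, prompt_tokens) * words_per_token)

# The 8K example above: a 6K-token prompt leaves roughly 1,500 words of reply.
remaining_words = budget_in_words(8000, 6000)
</code></pre>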
<p><strong>3. They Affect Attention Quality</strong></p>
<p><strong>Lost in the Middle Problem:</strong> Research shows models pay more attention to:</p>
<ul>
<li><p>The beginning of the context</p>
</li>
<li><p>The end of the context</p>
</li>
<li><p><strong>Less attention</strong> to information in the middle</p>
</li>
</ul>
<p><strong>Study findings (Liu et al., 2023):</strong></p>
<ul>
<li><p>Information at the start: ~70% retrieval accuracy</p>
</li>
<li><p>Information in the middle: ~40% retrieval accuracy</p>
</li>
<li><p>Information at the end: ~65% retrieval accuracy</p>
</li>
</ul>
<p><strong>Practical implication:</strong> Put critical information at the beginning or end of your prompt.</p>
<h3 id="heading-context-window-limitations"><strong>Context Window Limitations</strong></h3>
<p><strong>1. Processing Time</strong></p>
<p>Larger contexts = longer processing:</p>
<ul>
<li><p>8K tokens: ~2-5 seconds</p>
</li>
<li><p>128K tokens: ~10-30 seconds</p>
</li>
<li><p>1M tokens: Several minutes</p>
</li>
</ul>
<p><strong>2. Cost</strong></p>
<p>You pay for every token in the context:</p>
<pre><code class="lang-plaintext">Example: Analyzing a 100K token document
Input cost: 100K tokens × $0.015/1K = $1.50 per query
If you query 1,000 times: $1,500
</code></pre>
<p><strong>3. Quality Degradation</strong></p>
<p>Models perform worse as context windows fill:</p>
<ul>
<li><p><strong>Needle-in-haystack tests</strong> show accuracy drops significantly beyond 50% capacity</p>
</li>
<li><p>Complex reasoning degrades faster than simple retrieval</p>
</li>
</ul>
<p><strong>4. The Recency Bias</strong></p>
<p>Models weight more recent information higher. In long contexts:</p>
<ul>
<li><p>Earlier information may be "forgotten"</p>
</li>
<li><p>Later information dominates</p>
</li>
</ul>
<h3 id="heading-strategies-for-working-within-context-limits"><strong>Strategies for Working Within Context Limits</strong></h3>
<p><strong>1. Summarization</strong></p>
<pre><code class="lang-plaintext">Step 1: Summarize document in chunks
Step 2: Combine summaries
Step 3: Analyze final summary
</code></pre>
<p><strong>2. Sliding Window</strong><br />Process text in overlapping chunks:</p>
<pre><code class="lang-plaintext">Chunk 1: Tokens 0-8000
Chunk 2: Tokens 6000-14000
Chunk 3: Tokens 12000-20000
</code></pre>
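<p>That chunking scheme is a one-liner in Python; the chunk size and overlap below are the illustrative numbers from the example, not recommendations:</p>
<pre><code class="lang-python">def sliding_windows(tokens, size=8000, overlap=2000):
    """Split a token list into overlapping chunks so that context carries
    across chunk boundaries. `size` must exceed `overlap`."""
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(1, len(tokens) - overlap), step)]

# A 20,000-token document yields chunks 0-8000, 6000-14000, 12000-20000.
chunks = sliding_windows(list(range(20000)))
</code></pre>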
<p><strong>3. Retrieval-Augmented Generation (RAG)</strong><br />Don't put everything in context—retrieve only relevant sections:</p>
<pre><code class="lang-plaintext">1. Index all documents
2. User asks question
3. Retrieve top 5 relevant passages
4. Feed only those to the model
</code></pre>
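<p>A toy version of the retrieval step, using plain word overlap as a stand-in for the embedding similarity a real RAG system would use:</p>
<pre><code class="lang-python">def top_k_passages(question, passages, k=5):
    """Rank passages by word overlap with the question and return the k best.
    (Real systems use vector embeddings; overlap is just for illustration.)"""
    q_words = set(question.lower().split())
    return sorted(
        passages,
        key=lambda p: len(q_words.intersection(p.lower().split())),
        reverse=True,
    )[:k]

docs = [
    "Transformers use self-attention to relate words.",
    "Pickles are cucumbers preserved in brine.",
    "Attention lets the model weigh every word in the input.",
]
relevant = top_k_passages("how does attention relate words", docs, k=2)
</code></pre>
<p>Only the relevant passages reach the model, keeping the context—and the bill—small.</p>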
<p><strong>We'll cover this in detail in Post #18.</strong></p>
<p><strong>4. Prompt Compression</strong><br />Remove unnecessary verbosity:</p>
<pre><code class="lang-plaintext">❌ "I would really appreciate it if you could possibly help me understand..."
✅ "Explain..."
</code></pre>
<p><strong>5. Stateful Conversations</strong><br />For chatbots, summarize history periodically:</p>
<pre><code class="lang-plaintext">Every 10 messages:
- Summarize conversation so far
- Replace old messages with summary
- Continue with compressed context
</code></pre>
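<p>In code, that compaction loop can look like this sketch, where <code>summarize</code> is a hypothetical placeholder for another LLM call, not a real API:</p>
<pre><code class="lang-python">def compact_history(messages, summarize, every=10):
    """Once the history reaches `every` messages, replace everything except
    the latest message with a single summary entry."""
    if len(messages) // every == 0:  # still under the threshold: keep as-is
        return messages
    summary = summarize(messages[:-1])
    return [
        {"role": "system", "content": "Summary so far: " + summary},
        messages[-1],
    ]
</code></pre>
<p>Each compaction trades detail for headroom, so keep summaries focused on the decisions and facts the conversation still needs.</p>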
<h3 id="heading-the-future-of-context-windows"><strong>The Future of Context Windows</strong></h3>
<p><strong>Trends to watch:</strong></p>
<ul>
<li><p><strong>Infinite context:</strong> Research into models without fixed context limits</p>
</li>
<li><p><strong>Hierarchical attention:</strong> Models that process different context levels differently</p>
</li>
<li><p><strong>External memory:</strong> Models that can read/write to external storage</p>
</li>
<li><p><strong>Selective attention:</strong> Models that automatically focus on relevant parts</p>
</li>
</ul>
<p><strong>But for now:</strong> Work within the constraints. They're getting better, but they're not going away soon.</p>
<h2 id="heading-part-4-parameters-that-control-output-temperature-top-p-and-more"><strong>Part 4: Parameters That Control Output (Temperature, Top-p, and More)</strong></h2>
<p>These are the knobs and dials that change how the AI generates text. Understanding them is like understanding shutter speed and aperture in photography—<strong>technical knowledge that unlocks creative control.</strong></p>
<h3 id="heading-temperature-the-creativity-dial"><strong>Temperature: The Creativity Dial</strong></h3>
<p><strong>Definition:</strong> Controls randomness in token selection. Range: 0.0 to 2.0 (practical range: 0.0 to 1.5)</p>
<p><strong>How it works:</strong></p>
<p>The model calculates probabilities for all possible next tokens:</p>
<pre><code class="lang-plaintext">"The sky is ____"
- blue: 40% probability
- clear: 25% probability
- cloudy: 20% probability
- falling: 0.1% probability
</code></pre>
<p><strong>Temperature modifies these probabilities:</strong></p>
<p><strong>Temperature = 0 (Deterministic)</strong></p>
<ul>
<li><p>Always picks the highest probability token</p>
</li>
<li><p>Same prompt = same response every time</p>
</li>
<li><p>Maximum consistency, zero creativity</p>
</li>
</ul>
<p><strong>Temperature = 0.7 (Default/Balanced)</strong></p>
<ul>
<li><p>Mostly picks high-probability tokens</p>
</li>
<li><p>Some variation allowed</p>
</li>
<li><p>Balance of consistency and creativity</p>
</li>
</ul>
<p><strong>Temperature = 1.5 (High Creativity)</strong></p>
<ul>
<li><p>Flattens probability distribution</p>
</li>
<li><p>Low-probability tokens get more chances</p>
</li>
<li><p>More creative, more unpredictable</p>
</li>
<li><p>Higher risk of nonsense</p>
</li>
</ul>
<p><strong>Visual representation:</strong></p>
<pre><code class="lang-plaintext">Low temp (0.2):   ████████ blue
                  ██ clear
                  █ cloudy

High temp (1.5):  ████ blue
                  ███ clear
                  ██ cloudy
                  █ falling (now more likely!)
</code></pre>
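<p>Under the hood, temperature divides the model's logits before the softmax. A small Python sketch of that rescaling—the probabilities are invented illustrative numbers, not real model output:</p>
<pre><code class="lang-python">import math

def apply_temperature(probs, temperature):
    """Rescale a next-token distribution: low T sharpens it toward the top
    token, high T flattens it toward uniform."""
    scaled = [math.log(p) / temperature for p in probs.values()]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return {tok: e / total for tok, e in zip(probs, exps)}

probs = {"blue": 0.40, "clear": 0.25, "cloudy": 0.20, "falling": 0.15}
cold = apply_temperature(probs, 0.2)  # "blue" dominates even more
hot = apply_temperature(probs, 1.5)   # "falling" becomes more plausible
</code></pre>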
<h3 id="heading-when-to-use-different-temperatures"><strong>When to Use Different Temperatures</strong></h3>
<p><strong>Temperature 0 - 0.3: Maximum Precision</strong></p>
<ul>
<li><p>✅ Factual Q&amp;A</p>
</li>
<li><p>✅ Code generation</p>
</li>
<li><p>✅ Data extraction</p>
</li>
<li><p>✅ Classification tasks</p>
</li>
<li><p>✅ Legal/medical applications</p>
</li>
<li><p>✅ Anything requiring consistency</p>
</li>
</ul>
<p><strong>Example prompt:</strong></p>
<pre><code class="lang-plaintext">Temperature: 0
Extract the following from this email: sender, date, main request, urgency level.
</code></pre>
<p><strong>Temperature 0.7 - 1.0: Balanced (Default)</strong></p>
<ul>
<li><p>✅ General conversation</p>
</li>
<li><p>✅ Explanations</p>
</li>
<li><p>✅ Content writing</p>
</li>
<li><p>✅ Problem-solving</p>
</li>
<li><p>✅ Most use cases</p>
</li>
</ul>
<p><strong>Example prompt:</strong></p>
<pre><code class="lang-plaintext">Temperature: 0.7
Write a professional email declining a meeting request.
</code></pre>
<p><strong>Temperature 1.0 - 1.5: Maximum Creativity</strong></p>
<ul>
<li><p>✅ Creative writing</p>
</li>
<li><p>✅ Brainstorming</p>
</li>
<li><p>✅ Unique content generation</p>
</li>
<li><p>✅ Exploring unconventional ideas</p>
</li>
<li><p>❌ Not for factual accuracy</p>
</li>
</ul>
<p><strong>Example prompt:</strong></p>
<pre><code class="lang-plaintext">Temperature: 1.3
Generate 20 unusual marketing campaign ideas for an artisanal pickle company.
</code></pre>
<p><strong>Temperature 1.5 - 2.0: Experimental</strong></p>
<ul>
<li><p>Often produces nonsensical or incoherent results</p>
</li>
<li><p>Rarely useful in practice</p>
</li>
<li><p>Fun for experimentation</p>
</li>
</ul>
<h3 id="heading-top-p-nucleus-sampling-the-alternative-control"><strong>Top-p (Nucleus Sampling): The Alternative Control</strong></h3>
<p><strong>Definition:</strong> Instead of temperature, controls which tokens are considered by cumulative probability. Range: 0.0 to 1.0</p>
<p><strong>How it works:</strong></p>
<p><strong>Top-p = 0.9</strong> means "consider tokens until their cumulative probability reaches 90%"</p>
<pre><code class="lang-plaintext">Token probabilities:
- blue: 40% (cumulative: 40%)
- clear: 25% (cumulative: 65%)
- cloudy: 20% (cumulative: 85%)
- bright: 10% (cumulative: 95%) ← stops here at top-p=0.9
- falling: 5% (excluded)
</code></pre>
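<p>The same cut-off can be written as a short function. This sketch keeps the smallest set of highest-probability tokens whose cumulative mass reaches p, matching the worked example above:</p>
<pre><code class="lang-python">import bisect
import itertools

def top_p_filter(probs, p=0.9):
    """Nucleus sampling: keep the fewest top tokens whose cumulative
    probability reaches p, then renormalize."""
    ordered = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    cums = list(itertools.accumulate(pr for _, pr in ordered))
    cut = bisect.bisect_left(cums, p) + 1  # smallest prefix reaching mass p
    kept = dict(ordered[:cut])
    total = sum(kept.values())
    return {tok: pr / total for tok, pr in kept.items()}

probs = {"blue": 0.40, "clear": 0.25, "cloudy": 0.20,
         "bright": 0.10, "falling": 0.05}
nucleus = top_p_filter(probs)  # "falling" is excluded, the rest renormalized
</code></pre>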
<p><strong>The difference from temperature:</strong></p>
<ul>
<li><p><strong>Temperature:</strong> Adjusts ALL probabilities</p>
</li>
<li><p><strong>Top-p:</strong> Excludes low-probability tokens entirely</p>
</li>
</ul>
<p><strong>Typical values:</strong></p>
<ul>
<li><p><strong>Top-p = 0.1:</strong> Very conservative (top 10% of probability mass)</p>
</li>
<li><p><strong>Top-p = 0.5:</strong> Moderately conservative</p>
</li>
<li><p><strong>Top-p = 0.9:</strong> Balanced (default for many models)</p>
</li>
<li><p><strong>Top-p = 1.0:</strong> Consider all tokens (no filtering)</p>
</li>
</ul>
<p><strong>Pro tip:</strong> Use EITHER temperature OR top-p, not both. Combining them can produce unexpected results.</p>
<h3 id="heading-other-important-parameters"><strong>Other Important Parameters</strong></h3>
<p><strong>Max Tokens (Max Length)</strong></p>
<p><strong>Definition:</strong> Maximum number of tokens in the response.</p>
<p><strong>Use cases:</strong></p>
<ul>
<li><p>Controlling costs (shorter = cheaper)</p>
</li>
<li><p>Enforcing brevity</p>
</li>
<li><p>Ensuring responses fit in UI elements</p>
</li>
</ul>
<p><strong>Common mistake:</strong></p>
<pre><code class="lang-plaintext">❌ Setting max_tokens = 50 and asking for a detailed essay
Result: Response will be cut off mid-sentence
</code></pre>
<p><strong>Best practice:</strong></p>
<pre><code class="lang-plaintext">✅ Set max_tokens to slightly more than the expected output
For a 500-word response: max_tokens = 700-800
</code></pre>
<p><strong>Frequency Penalty</strong></p>
<p><strong>Definition:</strong> Reduces likelihood of repeating the same tokens. Range: -2.0 to 2.0</p>
<p><strong>How it works:</strong></p>
<ul>
<li><p><strong>0:</strong> No penalty (default)</p>
</li>
<li><p><strong>Positive values (0.5 - 1.0):</strong> Discourages repetition</p>
</li>
<li><p><strong>Negative values:</strong> Encourages repetition (rarely useful)</p>
</li>
</ul>
<p><strong>Use when:</strong></p>
<ul>
<li><p>Model is being repetitive</p>
</li>
<li><p>You want more diverse vocabulary</p>
</li>
<li><p>Generating lists or variations</p>
</li>
</ul>
<p><strong>Example:</strong></p>
<pre><code class="lang-plaintext">Without frequency penalty:
"The product is great. The product is amazing. The product is fantastic."

With frequency penalty (0.7):
"The product is great. It's amazing. This is fantastic."
</code></pre>
<p><strong>Presence Penalty</strong></p>
<p><strong>Definition:</strong> Encourages the model to introduce new topics. Range: -2.0 to 2.0</p>
<p><strong>How it works:</strong></p>
<ul>
<li><p><strong>0:</strong> No penalty (default)</p>
</li>
<li><p><strong>Positive values:</strong> Encourages talking about new concepts</p>
</li>
<li><p><strong>Negative values:</strong> Encourages staying on topic (rarely used)</p>
</li>
</ul>
<p><strong>Use when:</strong></p>
<ul>
<li><p>Model keeps circling back to same points</p>
</li>
<li><p>You want broader coverage</p>
</li>
<li><p>Brainstorming diverse ideas</p>
</li>
</ul>
<p><strong>The difference:</strong></p>
<ul>
<li><p><strong>Frequency penalty:</strong> "Don't use the same WORDS repeatedly"</p>
</li>
<li><p><strong>Presence penalty:</strong> "Don't talk about the same TOPICS repeatedly"</p>
</li>
</ul>
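<p>The word/topic distinction comes from how the two penalties adjust token scores during sampling. This toy sketch follows the OpenAI-style rules (it is an illustration in plain Python, not any vendor's actual implementation): frequency penalty grows with each repetition, presence penalty is a flat one-time deduction once a token has appeared at all.</p>

```python
from collections import Counter

def apply_penalties(logits, generated_tokens, frequency_penalty=0.0, presence_penalty=0.0):
    """Toy sketch of OpenAI-style sampling penalties.

    logits: dict mapping candidate token -> raw score
    generated_tokens: tokens produced so far
    """
    counts = Counter(generated_tokens)
    adjusted = {}
    for token, score in logits.items():
        # Frequency penalty scales with how many times the token already appeared.
        score -= frequency_penalty * counts[token]
        # Presence penalty is a flat deduction if the token appeared at all.
        if counts[token] > 0:
            score -= presence_penalty
        adjusted[token] = score
    return adjusted

logits = {"product": 2.0, "it": 1.5, "great": 1.8}
history = ["product", "product", "is", "great"]

out = apply_penalties(logits, history, frequency_penalty=0.7, presence_penalty=0.5)
# "product" (seen twice) is penalized harder than "great" (seen once);
# "it" (never seen) keeps its raw score.
```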
<p><strong>Stop Sequences</strong></p>
<p><strong>Definition:</strong> Tokens that signal the model to stop generating.</p>
<p><strong>Common use:</strong></p>
<pre><code class="lang-python">stop_sequences = ["\n\n", "###", "END"]
</code></pre>
<p><strong>Use cases:</strong></p>
<ul>
<li><p>Generating until specific delimiter</p>
</li>
<li><p>Stopping at natural breakpoints</p>
</li>
<li><p>Controlling output format</p>
</li>
</ul>
<p><strong>Example:</strong></p>
<pre><code class="lang-plaintext">Prompt: "List 5 ideas. Use ### to separate each idea."
Stop sequence: "###"

Output: "Idea 1: Product launch"
(generation halts at the delimiter instead of running on to the next idea)
</code></pre>
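<p>On the client side, the effect can be sketched as trimming everything from the first delimiter onward (real APIs halt generation server-side; this toy version just post-processes a finished string):</p>

```python
def apply_stop_sequences(text, stop_sequences):
    """Cut the text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

raw = "Idea 1: Product launch ### Idea 2: Referral program ###"
print(apply_stop_sequences(raw, ["###"]))  # -> "Idea 1: Product launch "
```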
<h3 id="heading-parameter-combinations-for-common-tasks"><strong>Parameter Combinations for Common Tasks</strong></h3>
<p><strong>1. Factual Q&amp;A</strong></p>
<pre><code class="lang-json">{
  "temperature": 0.1,
  "max_tokens": 500,
  "top_p": 0.9,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
</code></pre>
<p><strong>2. Creative Writing</strong></p>
<pre><code class="lang-json">{
  "temperature": 1.0,
  "max_tokens": 2000,
  "top_p": 0.95,
  "frequency_penalty": 0.5,
  "presence_penalty": 0.3
}
</code></pre>
<p><strong>3. Code Generation</strong></p>
<pre><code class="lang-json">{
  "temperature": 0.2,
  "max_tokens": 1500,
  "top_p": 0.9,
  "frequency_penalty": 0.2,
  "presence_penalty": 0
}
</code></pre>
<p><strong>4. Brainstorming</strong></p>
<pre><code class="lang-json">{
  "temperature": 1.2,
  "max_tokens": 1000,
  "top_p": 0.95,
  "frequency_penalty": 0.8,
  "presence_penalty": 0.6
}
</code></pre>
<p><strong>5. Conversation</strong></p>
<pre><code class="lang-json">{
  "temperature": 0.7,
  "max_tokens": 800,
  "top_p": 0.9,
  "frequency_penalty": 0.3,
  "presence_penalty": 0.1
}
</code></pre>
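<p>The five presets above can live in one dictionary and be merged into whatever client call you make. The parameter names follow the OpenAI-style API, and the numbers are the starting points from this section, not universal constants:</p>

```python
PRESETS = {
    "factual_qa":   {"temperature": 0.1, "max_tokens": 500,  "top_p": 0.9,  "frequency_penalty": 0,   "presence_penalty": 0},
    "creative":     {"temperature": 1.0, "max_tokens": 2000, "top_p": 0.95, "frequency_penalty": 0.5, "presence_penalty": 0.3},
    "code":         {"temperature": 0.2, "max_tokens": 1500, "top_p": 0.9,  "frequency_penalty": 0.2, "presence_penalty": 0},
    "brainstorm":   {"temperature": 1.2, "max_tokens": 1000, "top_p": 0.95, "frequency_penalty": 0.8, "presence_penalty": 0.6},
    "conversation": {"temperature": 0.7, "max_tokens": 800,  "top_p": 0.9,  "frequency_penalty": 0.3, "presence_penalty": 0.1},
}

def params_for(task, **overrides):
    """Return a copy of the preset for `task`, with optional overrides."""
    params = dict(PRESETS[task])
    params.update(overrides)
    return params

# e.g. client.chat.completions.create(model=..., messages=..., **params_for("code"))
```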
<h3 id="heading-experimentation-framework"><strong>Experimentation Framework</strong></h3>
<p><strong>To find optimal parameters:</strong></p>
<ol>
<li><p><strong>Start with defaults</strong> (temp: 0.7, top_p: 0.9)</p>
</li>
<li><p><strong>Test one variable at a time</strong></p>
</li>
<li><p><strong>Run multiple times</strong> (at temp &gt; 0, outputs vary)</p>
</li>
<li><p><strong>Measure results</strong> against your criteria</p>
</li>
<li><p><strong>Document what works</strong></p>
</li>
</ol>
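<p>The five steps can be wired into a small harness. <code>generate</code> and <code>score</code> here are stand-ins for your actual model call and quality metric; the stubs at the bottom exist only so the sketch runs without an API key:</p>

```python
import statistics

def run_experiment(generate, score, prompt, temperatures=(0.0, 0.7, 1.2), runs=3):
    """Vary one parameter (temperature) at a time, averaging over several runs."""
    results = {}
    for temp in temperatures:
        # At temp > 0 outputs vary, so run multiple times and average.
        scores = [score(generate(prompt, temperature=temp)) for _ in range(runs)]
        results[temp] = statistics.mean(scores)
    return results

# Stub model and metric so the harness is runnable standalone.
fake_generate = lambda prompt, temperature: prompt + "!" * round(temperature * 10)
fake_score = lambda text: len(text)

print(run_experiment(fake_generate, fake_score, "hi"))
```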
<p><strong>Pro tip:</strong> Create a spreadsheet tracking:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Task</td><td>Temperature</td><td>Top_p</td><td>Max_tokens</td><td>Quality (1-10)</td><td>Notes</td></tr>
</thead>
<tbody>
<tr>
<td></td><td></td><td></td><td></td><td></td><td></td></tr>
</tbody>
</table>
</div><h2 id="heading-part-5-different-model-architectures-gpt-claude-gemini-llama-and-more"><strong>Part 5: Different Model Architectures (GPT, Claude, Gemini, LLaMA, and More)</strong></h2>
<p>Not all LLMs are created equal. Understanding the landscape helps you choose the right tool for the job.</p>
<h3 id="heading-the-major-players-a-comparative-overview"><strong>The Major Players: A Comparative Overview</strong></h3>
<p><strong>OpenAI GPT Series</strong></p>
<p><strong>GPT-3.5 Turbo</strong></p>
<ul>
<li><p><strong>Size:</strong> 175B parameters</p>
</li>
<li><p><strong>Context:</strong> 16K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Fast, cheap, good for most tasks</p>
</li>
<li><p><strong>Weaknesses:</strong> Less capable than GPT-4, more hallucinations</p>
</li>
<li><p><strong>Best for:</strong> High-volume, cost-sensitive applications</p>
</li>
<li><p><strong>Cost:</strong> $0.0015/1K input, $0.002/1K output</p>
</li>
</ul>
<p><strong>GPT-4</strong></p>
<ul>
<li><p><strong>Size:</strong> ~1.7T parameters (mixture of experts)</p>
</li>
<li><p><strong>Context:</strong> 8K, 32K, or 128K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Best reasoning, complex tasks, instruction following</p>
</li>
<li><p><strong>Weaknesses:</strong> Expensive, slower</p>
</li>
<li><p><strong>Best for:</strong> Complex analysis, high-stakes content, reasoning</p>
</li>
<li><p><strong>Cost:</strong> $0.03/1K input, $0.06/1K output</p>
</li>
</ul>
<p><strong>GPT-4 Turbo</strong></p>
<ul>
<li><p>Updated GPT-4 with better performance and longer context</p>
</li>
<li><p><strong>Context:</strong> 128K tokens</p>
</li>
<li><p><strong>Cost:</strong> Slightly cheaper than GPT-4</p>
</li>
</ul>
<p><strong>GPT-4V (Vision)</strong></p>
<ul>
<li><p>Multimodal: text + images</p>
</li>
<li><p>Can analyze screenshots, diagrams, photos</p>
</li>
<li><p>Same core capabilities as GPT-4</p>
</li>
</ul>
<p><strong>Anthropic Claude Series</strong></p>
<p><strong>Claude 3 Haiku</strong></p>
<ul>
<li><p><strong>Size:</strong> Smallest in Claude 3 family</p>
</li>
<li><p><strong>Context:</strong> 200K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Fastest, cheapest Claude, massive context</p>
</li>
<li><p><strong>Best for:</strong> Simple tasks needing large context</p>
</li>
<li><p><strong>Cost:</strong> $0.00025/1K input, $0.00125/1K output</p>
</li>
</ul>
<p><strong>Claude 3 Sonnet</strong></p>
<ul>
<li><p><strong>Size:</strong> Mid-tier</p>
</li>
<li><p><strong>Context:</strong> 200K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Balance of capability and cost</p>
</li>
<li><p><strong>Best for:</strong> Most production use cases</p>
</li>
<li><p><strong>Cost:</strong> $0.003/1K input, $0.015/1K output</p>
</li>
</ul>
<p><strong>Claude 3.5 Sonnet</strong></p>
<ul>
<li><p>Updated Sonnet with improved performance</p>
</li>
<li><p><strong>Better at:</strong> Coding, reasoning, nuanced tasks</p>
</li>
<li><p><strong>Context:</strong> 200K tokens</p>
</li>
</ul>
<p><strong>Claude 3 Opus</strong></p>
<ul>
<li><p><strong>Size:</strong> Largest Claude model</p>
</li>
<li><p><strong>Context:</strong> 200K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Highest quality, excellent at complex reasoning</p>
</li>
<li><p><strong>Best for:</strong> Challenging tasks, long-document analysis</p>
</li>
<li><p><strong>Cost:</strong> $0.015/1K input, $0.075/1K output</p>
</li>
</ul>
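<p>These rates make cost comparisons a one-liner. A small helper using the per-1K-token prices quoted in this post (treat them as a snapshot to verify against current pricing pages, since they change):</p>

```python
# USD per 1K tokens (input, output), as quoted above -- verify against current pricing.
PRICES = {
    "gpt-3.5-turbo":   (0.0015,  0.002),
    "gpt-4":           (0.03,    0.06),
    "claude-3-haiku":  (0.00025, 0.00125),
    "claude-3-sonnet": (0.003,   0.015),
    "claude-3-opus":   (0.015,   0.075),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimated USD cost for one request."""
    inp, out = PRICES[model]
    return (input_tokens / 1000) * inp + (output_tokens / 1000) * out

# A 10K-token prompt with a 1K-token answer:
for model in ("gpt-4", "claude-3-haiku"):
    print(model, round(estimate_cost(model, 10_000, 1_000), 5))
```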
<p><strong>Claude's Unique Characteristics:</strong></p>
<ul>
<li><p>✅ Constitutional AI (safety-focused training)</p>
</li>
<li><p>✅ Better at declining inappropriate requests</p>
</li>
<li><p>✅ Excellent at long-form analysis</p>
</li>
<li><p>✅ Strong performance on nuanced tasks</p>
</li>
<li><p>✅ Less verbose than GPT-4 (more concise)</p>
</li>
</ul>
<p><strong>Google Gemini Series</strong></p>
<p><strong>Gemini 1.0 Pro</strong></p>
<ul>
<li><p><strong>Context:</strong> 32K tokens</p>
</li>
<li><p><strong>Strengths:</strong> Multimodal (text, images, audio, video)</p>
</li>
<li><p><strong>Best for:</strong> Applications needing multiple input types</p>
</li>
</ul>
<p><strong>Gemini 1.5 Pro</strong></p>
<ul>
<li><p><strong>Context:</strong> 1M tokens (largest available)</p>
</li>
<li><p><strong>Strengths:</strong> Analyzing entire codebases, books, video transcripts</p>
</li>
<li><p><strong>Weaknesses:</strong> Long context = slower, expensive</p>
</li>
<li><p><strong>Best for:</strong> Tasks requiring massive context</p>
</li>
</ul>
<p><strong>Gemini Ultra</strong></p>
<ul>
<li><p><strong>Largest Gemini model</strong></p>
</li>
<li><p><strong>Competitive with GPT-4 and Claude 3 Opus</strong></p>
</li>
<li><p><strong>Multimodal capabilities</strong></p>
</li>
</ul>
<p><strong>Gemini's Unique Characteristics:</strong></p>
<ul>
<li><p>✅ Native multimodal design (not bolted-on vision)</p>
</li>
<li><p>✅ Longest context window (1M tokens)</p>
</li>
<li><p>✅ Strong at technical/scientific tasks</p>
</li>
<li><p>✅ Deep Google integration</p>
</li>
</ul>
<p><strong>Meta LLaMA Series</strong></p>
<p><strong>LLaMA 2</strong></p>
<ul>
<li><p><strong>Sizes:</strong> 7B, 13B, 70B parameters</p>
</li>
<li><p><strong>License:</strong> Open source (with usage restrictions)</p>
</li>
<li><p><strong>Context:</strong> 4K-32K tokens depending on version</p>
</li>
<li><p><strong>Strengths:</strong> Can self-host, customize, fine-tune</p>
</li>
<li><p><strong>Best for:</strong> Privacy-sensitive applications, customization needs</p>
</li>
</ul>
<p><strong>LLaMA 3</strong></p>
<ul>
<li><p><strong>Improved over LLaMA 2</strong></p>
</li>
<li><p><strong>Better multilingual support</strong></p>
</li>
<li><p><strong>Enhanced reasoning</strong></p>
</li>
</ul>
<p><strong>Open Source Advantages:</strong></p>
<ul>
<li><p>✅ Self-host (data stays internal)</p>
</li>
<li><p>✅ Fine-tune for specific domains</p>
</li>
<li><p>✅ No per-token costs (just compute)</p>
</li>
<li><p>✅ Full control over model behavior</p>
</li>
</ul>
<p><strong>Open Source Disadvantages:</strong></p>
<ul>
<li><p>❌ Requires infrastructure</p>
</li>
<li><p>❌ Need ML expertise for deployment</p>
</li>
<li><p>❌ Generally less capable than frontier models</p>
</li>
<li><p>❌ You handle all safety/moderation</p>
</li>
</ul>
<h3 id="heading-other-notable-models"><strong>Other Notable Models</strong></h3>
<p><strong>Mistral AI</strong></p>
<ul>
<li><p><strong>Mistral 7B:</strong> Efficient, competitive with 13B models</p>
</li>
<li><p><strong>Mixtral 8x7B:</strong> Mixture of experts, strong performance</p>
</li>
<li><p><strong>Open source:</strong> Similar benefits to LLaMA</p>
</li>
</ul>
<p><strong>Cohere</strong></p>
<ul>
<li><p><strong>Command:</strong> Optimized for business applications</p>
</li>
<li><p><strong>Strong at:</strong> Classification, embeddings, search</p>
</li>
</ul>
<p><strong>AI21 Jurassic</strong></p>
<ul>
<li><p><strong>Jurassic-2:</strong> Various sizes</p>
</li>
<li><p><strong>Focus:</strong> Multi-language, long-form content</p>
</li>
</ul>
<h3 id="heading-specialized-models-worth-knowing"><strong>Specialized Models Worth Knowing</strong></h3>
<p><strong>Code-Specific Models</strong></p>
<p><strong>CodeLlama (Meta)</strong></p>
<ul>
<li><p>Based on LLaMA, trained on code</p>
</li>
<li><p>Better at programming than base LLaMA</p>
</li>
</ul>
<p><strong>StarCoder</strong></p>
<ul>
<li><p>Open source, 15B parameters</p>
</li>
<li><p>Trained on 80+ programming languages</p>
</li>
</ul>
<p><strong>Phind-CodeLlama</strong></p>
<ul>
<li>Fine-tuned CodeLlama for development</li>
</ul>
<p><strong>Embedding Models</strong></p>
<p><strong>OpenAI text-embedding-ada-002</strong></p>
<ul>
<li><p>For semantic search, clustering</p>
</li>
<li><p>1536 dimensions</p>
</li>
</ul>
<p><strong>Cohere Embed</strong></p>
<ul>
<li><p>Multilingual embeddings</p>
</li>
<li><p>Various size options</p>
</li>
</ul>
<p><strong>Sentence Transformers</strong></p>
<ul>
<li><p>Open source embedding models</p>
</li>
<li><p>Self-hostable</p>
</li>
</ul>
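<p>Whichever embedding model you pick, downstream use is the same: compare vectors with cosine similarity. A dependency-free sketch (the three-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions):</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: a query, a near match, and an unrelated sentence.
query    = [0.9, 0.1, 0.0]
match    = [0.8, 0.2, 0.1]
offtopic = [0.0, 0.1, 0.9]

print(cosine_similarity(query, match))     # high (close to 1)
print(cosine_similarity(query, offtopic))  # near zero
```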
<h3 id="heading-model-comparison-matrix"><strong>Model Comparison Matrix</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>GPT-4</td><td>Claude 3 Opus</td><td>Gemini Ultra</td><td>LLaMA 2 70B</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Reasoning</strong></td><td>★★★★★</td><td>★★★★★</td><td>★★★★☆</td><td>★★★☆☆</td></tr>
<tr>
<td><strong>Coding</strong></td><td>★★★★★</td><td>★★★★★</td><td>★★★★☆</td><td>★★★☆☆</td></tr>
<tr>
<td><strong>Creative Writing</strong></td><td>★★★★★</td><td>★★★★★</td><td>★★★★☆</td><td>★★★★☆</td></tr>
<tr>
<td><strong>Long Context</strong></td><td>★★★★☆</td><td>★★★★★</td><td>★★★★★</td><td>★★☆☆☆</td></tr>
<tr>
<td><strong>Speed</strong></td><td>★★★☆☆</td><td>★★★★☆</td><td>★★★☆☆</td><td>★★★★★</td></tr>
<tr>
<td><strong>Cost Efficiency</strong></td><td>★★☆☆☆</td><td>★★★☆☆</td><td>★★★☆☆</td><td>★★★★★</td></tr>
<tr>
<td><strong>Multilingual</strong></td><td>★★★★☆</td><td>★★★★☆</td><td>★★★★★</td><td>★★★☆☆</td></tr>
<tr>
<td><strong>Safety</strong></td><td>★★★★☆</td><td>★★★★★</td><td>★★★★☆</td><td>★★★☆☆</td></tr>
</tbody>
</table>
</div><h3 id="heading-architectural-differences-that-matter"><strong>Architectural Differences That Matter</strong></h3>
<p><strong>1. Mixture of Experts (MoE)</strong></p>
<p><strong>Used by:</strong> GPT-4, Mixtral</p>
<p><strong>How it works:</strong></p>
<ul>
<li><p>Multiple smaller "expert" models</p>
</li>
<li><p>Router network decides which experts to activate</p>
</li>
<li><p>Only activate relevant experts for each token</p>
</li>
</ul>
<p><strong>Advantages:</strong></p>
<ul>
<li><p>More efficient than single large model</p>
</li>
<li><p>Specialization (different experts for different domains)</p>
</li>
</ul>
<p><strong>Disadvantages:</strong></p>
<ul>
<li><p>More complex to train and deploy</p>
</li>
<li><p>Can be inconsistent</p>
</li>
</ul>
<p><strong>2. Constitutional AI</strong></p>
<p><strong>Used by:</strong> Claude series</p>
<p><strong>How it works:</strong></p>
<ul>
<li><p>Model is trained with explicit "constitution" of principles</p>
</li>
<li><p>Self-critiques and revises responses</p>
</li>
<li><p>Trained to explain refusals</p>
</li>
</ul>
<p><strong>Result:</strong> Claude tends to be more careful, explicit about limitations</p>
<p><strong>3. Retrieval-Enhanced Generation</strong></p>
<p><strong>Used by:</strong> Some specialized models, Perplexity AI</p>
<p><strong>How it works:</strong></p>
<ul>
<li><p>Model can search external sources</p>
</li>
<li><p>Grounds responses in retrieved information</p>
</li>
<li><p>Provides citations</p>
</li>
</ul>
<p><strong>Advantage:</strong> Reduced hallucinations, up-to-date information</p>
<p><strong>4. Multimodal Architecture</strong></p>
<p><strong>Native multimodal (Gemini):</strong></p>
<ul>
<li><p>Trained on text, images, audio, video simultaneously</p>
</li>
<li><p>Better cross-modal understanding</p>
</li>
</ul>
<p><strong>Adapter multimodal (GPT-4V):</strong></p>
<ul>
<li><p>Base model + vision adapter</p>
</li>
<li><p>Still very capable but different architecture</p>
</li>
</ul>
<h3 id="heading-choosing-the-right-model-decision-framework"><strong>Choosing the Right Model: Decision Framework</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>GPT-4</strong></td><td><strong>Claude 3 Opus</strong></td><td><strong>Claude 3.5 Sonnet</strong></td><td><strong>GPT-3.5 Turbo</strong></td><td><strong>Gemini 1.5 Pro</strong></td><td><strong>LLaMA/Mistral</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Maximum quality is critical</td><td>Long-document analysis (200K context)</td><td>Need strong coding assistance</td><td>High volume, cost matters</td><td>Need 1M token context</td><td>Data privacy is critical (self-host)</td></tr>
<tr>
<td>Complex reasoning required</td><td>Nuanced, thoughtful responses needed</td><td>Balance of cost and quality</td><td>Simpler tasks</td><td>Analyzing entire codebases/books</td><td>Need to fine-tune</td></tr>
<tr>
<td>Budget allows</td><td>Safety/ethics are paramount</td><td>Production applications</td><td>Speed is priority</td><td>Multimodal inputs</td><td>High volume (no per-token cost)</td></tr>
<tr>
<td>Tasks are high-stakes</td><td>You prefer more concise outputs</td><td>Large context helpful but not required</td><td>Good enough &gt; perfect</td><td>Google ecosystem integration</td><td>Have ML infrastructure</td></tr>
</tbody>
</table>
</div><h3 id="heading-model-performance-on-common-tasks"><strong>Model Performance on Common Tasks</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Tasks</strong></td><td><strong>Models</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Code Generation</strong></td><td>Claude 3.5 Sonnet (best), GPT-4, GPT-3.5 Turbo, LLaMA 2 70B</td></tr>
<tr>
<td><strong>Creative Writing</strong></td><td>Claude 3 Opus, GPT-4, Claude 3.5 Sonnet, GPT-3.5 Turbo</td></tr>
<tr>
<td><strong>Factual Q&amp;A</strong></td><td>GPT-4, Claude 3 Opus, Gemini Ultra, Claude 3.5 Sonnet</td></tr>
<tr>
<td><strong>Long-Document Analysis</strong></td><td>Claude 3 Opus (200K), Gemini 1.5 Pro (1M), GPT-4 Turbo (128K), Claude 3.5 Sonnet (200K)</td></tr>
<tr>
<td><strong>Cost per Quality</strong></td><td>Claude 3.5 Sonnet, GPT-3.5 Turbo, Claude 3 Haiku, Gemini Pro</td></tr>
<tr>
<td><strong>Reasoning &amp; Logic</strong></td><td>GPT-4, Claude 3 Opus, Claude 3.5 Sonnet, Gemini Ultra</td></tr>
</tbody>
</table>
</div><h2 id="heading-part-6-putting-it-all-together"><strong>Part 6: Putting It All Together</strong></h2>
<h3 id="heading-your-model-selection-worksheet"><strong>Your Model Selection Worksheet</strong></h3>
<p>Answer these questions to choose the right model:</p>
<p><strong>1. What's your primary task?</strong></p>
<ul>
<li><p>Simple Q&amp;A → GPT-3.5 Turbo, Claude Haiku</p>
</li>
<li><p>Complex reasoning → GPT-4, Claude 3 Opus</p>
</li>
<li><p>Coding → Claude 3.5 Sonnet, GPT-4</p>
</li>
<li><p>Creative writing → Claude 3 Opus, GPT-4</p>
</li>
<li><p>Document analysis → Claude 3 Opus, Gemini 1.5 Pro</p>
</li>
</ul>
<p><strong>2. What's your context requirement?</strong></p>
<ul>
<li><p>&lt; 8K tokens → Any model</p>
</li>
<li><p>8K-32K tokens → GPT-4, Claude, Gemini Pro</p>
</li>
<li><p>32K-200K tokens → Claude 3, GPT-4 Turbo</p>
</li>
<li><p>200K+ tokens → Gemini 1.5 Pro</p>
</li>
</ul>
<p><strong>3. What's your volume?</strong></p>
<ul>
<li><p>Low (&lt; 1M tokens/month) → Use best quality</p>
</li>
<li><p>Medium (1M-10M) → Balance cost/quality</p>
</li>
<li><p>High (&gt; 10M) → Cost-optimize or self-host</p>
</li>
</ul>
<p><strong>4. What's your budget per 1K tokens?</strong></p>
<ul>
<li><p>&lt; $0.01 → GPT-3.5, Claude Haiku, self-host</p>
</li>
<li><p>$0.01-$0.05 → Claude 3.5 Sonnet, GPT-4 Turbo</p>
</li>
<li><p>&gt; $0.05 → GPT-4, Claude 3 Opus (when needed)</p>
</li>
</ul>
<p><strong>5. What's your latency requirement?</strong></p>
<ul>
<li><p>Real-time (&lt;2s) → GPT-3.5 Turbo, Claude Haiku</p>
</li>
<li><p>Interactive (&lt;5s) → Most models</p>
</li>
<li><p>Batch processing → Any model, optimize for cost</p>
</li>
</ul>
<p><strong>6. Do you need special capabilities?</strong></p>
<ul>
<li><p>Vision → GPT-4V, Gemini</p>
</li>
<li><p>Massive context → Gemini 1.5 Pro</p>
</li>
<li><p>Self-hosting → LLaMA, Mistral</p>
</li>
<li><p>Maximum safety → Claude 3</p>
</li>
</ul>
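<p>The worksheet collapses into a first-pass routing function. This is only a sketch of the decision order above (privacy first, then context size, then complexity vs. cost); the return strings are shorthand labels, not official model IDs:</p>

```python
def pick_model(needs_privacy=False, context_tokens=8_000,
               complex_reasoning=False, high_volume=False):
    """First-pass model routing based on the worksheet questions above."""
    if needs_privacy:
        return "llama-or-mistral-self-hosted"
    if context_tokens > 200_000:
        return "gemini-1.5-pro"          # only option past the 200K window
    if context_tokens > 128_000:
        return "claude-3-family"         # 200K context
    if complex_reasoning:
        return "gpt-4-or-claude-3-opus"
    if high_volume:
        return "gpt-3.5-turbo-or-claude-haiku"
    return "claude-3.5-sonnet"           # default cost/quality balance

print(pick_model(context_tokens=500_000))
print(pick_model(high_volume=True))
```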
<h3 id="heading-practical-exercises"><strong>Practical Exercises</strong></h3>
<p><strong>Exercise 1: Token Counting</strong></p>
<p>Test these prompts in a tokenizer:</p>
<ol>
<li><p>"Hello world"</p>
</li>
<li><p>"The quick brown fox jumps over the lazy dog"</p>
</li>
<li><p>Your name (if it's unusual, see how it tokenizes)</p>
</li>
<li><p>A sentence in another language you speak</p>
</li>
<li><p>A code snippet</p>
</li>
</ol>
<p><strong>What did you learn about tokenization?</strong></p>
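<p>Before reaching for a tokenizer, you can sanity-check counts with the common rule of thumb that English text averages roughly 4 characters per token. This is a heuristic only; a real tokenizer (e.g. tiktoken) gives exact counts and will diverge for code, other languages, and unusual names:</p>

```python
def estimate_tokens(text):
    """Rough English-text heuristic: ~4 characters per token.

    Use a real tokenizer for exact counts; this is a ballpark only.
    """
    return max(1, round(len(text) / 4))

print(estimate_tokens("Hello world"))                                  # ~3
print(estimate_tokens("The quick brown fox jumps over the lazy dog"))  # ~11
```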
<p><strong>Exercise 2: Temperature Experiments</strong></p>
<p>Use the same prompt with different temperatures:</p>
<p><strong>Prompt:</strong> "Write a product description for wireless headphones"</p>
<p>Try:</p>
<ul>
<li><p>Temperature 0</p>
</li>
<li><p>Temperature 0.7</p>
</li>
<li><p>Temperature 1.2</p>
</li>
</ul>
<p><strong>Compare:</strong> Consistency, creativity, quality</p>
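<p>You can see the mechanism behind these settings with a plain softmax. Temperature divides the logits before normalizing: low values sharpen the distribution toward the top token, high values flatten it (the logits below are toy scores, not from any real model):</p>

```python
import math

def softmax_with_temperature(logits, temperature):
    if temperature == 0:
        # Temperature 0 degenerates to greedy decoding: all mass on the best token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
for t in (0, 0.7, 1.2):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

Run it and you'll see the top token's probability fall as temperature rises, which is exactly the consistency-vs-creativity trade you're testing in this exercise.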
<p><strong>Exercise 3: Context Window Testing</strong></p>
<p>Take a long document (5,000+ words). Try:</p>
<ol>
<li><p>Summarizing the whole thing</p>
</li>
<li><p>Asking about information at the beginning</p>
</li>
<li><p>Asking about information in the middle</p>
</li>
<li><p>Asking about information at the end</p>
</li>
</ol>
<p><strong>Notice:</strong> Where accuracy is highest/lowest</p>
<p><strong>Exercise 4: Model Comparison</strong></p>
<p>Same prompt, three different models:</p>
<p><strong>Prompt:</strong> "Explain quantum entanglement to a high school student"</p>
<p>Test with:</p>
<ul>
<li><p>GPT-3.5 Turbo</p>
</li>
<li><p>Claude 3.5 Sonnet</p>
</li>
<li><p>GPT-4 (if available)</p>
</li>
</ul>
<p><strong>Compare:</strong> Clarity, accuracy, style, length</p>
<h3 id="heading-common-mistakes-to-avoid"><strong>Common Mistakes to Avoid</strong></h3>
<p><strong>❌ Mistake 1: Ignoring tokenization</strong><br /><em>"I'll just write naturally and not worry about it"</em><br /><strong>Problem:</strong> Hitting context limits unexpectedly, wasting tokens</p>
<p><strong>❌ Mistake 2: Using maximum context always</strong><br /><em>"I'll always use the longest context available"</em><br /><strong>Problem:</strong> Slower, more expensive, and quality degrades as the window fills</p>
<p><strong>❌ Mistake 3: Default parameters for everything</strong><br /><em>"I'll just use temperature 0.7 for all tasks"</em><br /><strong>Problem:</strong> Suboptimal results—code generation needs lower, creative needs higher</p>
<p><strong>❌ Mistake 4: Not testing different models</strong><br /><em>"GPT-4 is best, so I'll use it for everything"</em><br /><strong>Problem:</strong> Overpaying for simple tasks, missing specialized strengths</p>
<p><strong>❌ Mistake 5: Assuming determinism at temp &gt; 0</strong><br /><em>"I ran it once, so that's what it always does"</em><br /><strong>Problem:</strong> Inconsistent results surprise you in production</p>
<p><strong>❌ Mistake 6: Exceeding context without checking</strong><br /><em>"I'll just paste this whole document"</em><br /><strong>Problem:</strong> Truncated results, missed information, wasted tokens</p>
<p><strong>❌ Mistake 7: Treating all models the same</strong><br /><em>"They're all LLMs, so prompts should work identically"</em><br /><strong>Problem:</strong> Each model has quirks, optimal prompting differs</p>
<h2 id="heading-key-takeaways-what-you-must-remember"><strong>Key Takeaways: What You Must Remember</strong></h2>
<p><strong>🔹 LLMs predict text, they don't understand it</strong><br />This explains why they can be confidently wrong.</p>
<p><strong>🔹 Tokens ≠ words</strong><br />Budget and plan in tokens, not words.</p>
<p><strong>🔹 Context windows are your constraint</strong><br />Design prompts that fit. Put critical info at the start/end.</p>
<p><strong>🔹 Temperature controls creativity</strong><br />Low for facts, high for creativity.</p>
<p><strong>🔹 Different models have different strengths</strong><br />Choose based on task, not just "best overall."</p>
<p><strong>🔹 Parameters matter as much as the prompt</strong><br />The same prompt with different parameters produces different results.</p>
<p><strong>🔹 Bigger isn't always better</strong><br />A well-prompted smaller model beats a poorly-prompted larger one.</p>
<p><strong>🔹 Context quality &gt; context quantity</strong><br />200K tokens of irrelevant information &lt; 2K tokens of perfect context.</p>
<h2 id="heading-whats-next"><strong>What's Next</strong></h2>
<p>In <strong>Post #3: "Anatomy of an Effective Prompt"</strong>, we'll take everything you've learned about how models work and translate it into practical prompt construction:</p>
<ul>
<li><p>The essential components every prompt needs</p>
</li>
<li><p>Clear vs. vague instructions (with examples)</p>
</li>
<li><p>How to provide context effectively</p>
</li>
<li><p>Formatting outputs exactly as you want</p>
</li>
<li><p>Common beginner mistakes and how to avoid them</p>
</li>
<li><p>Your first prompt templates</p>
</li>
</ul>
<p><strong>Now you understand the engine. Next, you'll learn to drive it masterfully.</strong></p>
<h2 id="heading-resource-links"><strong>Resource Links</strong></h2>
<p><strong>Tokenizers:</strong></p>
<ul>
<li><p><a target="_blank" href="https://platform.openai.com/tokenizer">OpenAI Tokenizer</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/openai/tiktoken">TikToken GitHub</a></p>
</li>
</ul>
<p><strong>Model Documentation:</strong></p>
<ul>
<li><p><a target="_blank" href="https://platform.openai.com/docs/models">OpenAI Models</a></p>
</li>
<li><p><a target="_blank" href="https://docs.anthropic.com/claude/docs">Anthropic Claude</a></p>
</li>
<li><p><a target="_blank" href="https://ai.google.dev/gemini-api/docs">Google Gemini</a></p>
</li>
<li><p><a target="_blank" href="https://ai.meta.com/llama/">Meta LLaMA</a></p>
</li>
</ul>
<p><strong>Research Papers:</strong></p>
<ul>
<li><p>"Attention is All You Need" (Transformers)</p>
</li>
<li><p>"Lost in the Middle" (Context window study)</p>
</li>
<li><p>"Constitutional AI" (Anthropic's approach)</p>
</li>
</ul>
<h2 id="heading-your-assignment-before-post-3"><strong>Your Assignment Before Post #3</strong></h2>
<ol>
<li><p><strong>Experiment with at least two different models</strong> on the same task</p>
</li>
<li><p><strong>Try the same prompt with different temperatures</strong> (0, 0.7, 1.2)</p>
</li>
<li><p><strong>Check the token count</strong> of your typical prompts</p>
</li>
<li><p><strong>Test a long document</strong> against context limits</p>
</li>
<li><p><strong>Document your findings</strong>—what surprised you?</p>
</li>
</ol>
<p><strong>Share your discoveries in the comments!</strong> What worked? What didn't? What confused you?</p>
<p><strong>Next up:</strong> <em>Post #3 - "Anatomy of an Effective Prompt"</em></p>
<p><strong>You now understand the machine. Time to master the interface.</strong></p>
<p><em>Questions? Confused about anything? Drop a comment—I read and respond to all of them. This series only gets better with your input.</em></p>
<p><strong>Welcome to Level 2 of prompt engineering mastery. You're building something powerful.</strong></p>
]]></content:encoded></item><item><title><![CDATA[What is Prompt Engineering? A Complete Introduction]]></title><description><![CDATA[Welcome to the future of human-AI collaboration. If you're reading this in 2024-2025, you're witnessing a fundamental shift in how humans interact with machines—and prompt engineering is your passport to this new world.
The Definition: What Exactly I...]]></description><link>https://deviloper.in/what-is-prompt-engineering-complete-guide</link><guid isPermaLink="true">https://deviloper.in/what-is-prompt-engineering-complete-guide</guid><category><![CDATA[#PromptEngineering]]></category><category><![CDATA[AI]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[Artificial Intelligence]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Fri, 19 Dec 2025 12:34:45 GMT</pubDate><content:encoded><![CDATA[<p><em>Welcome to the future of human-AI collaboration. If you're reading this in 2024-2025, you're witnessing a fundamental shift in how humans interact with machines—and prompt engineering is your passport to this new world.</em></p>
<h2 id="heading-the-definition-what-exactly-is-prompt-engineering"><strong>The Definition: What Exactly IS Prompt Engineering?</strong></h2>
<p><strong>Prompt engineering is the art and science of crafting instructions that guide artificial intelligence systems to produce desired outputs.</strong> It's the bridge between human intention and machine capability, the translator between what you want and what AI can deliver.</p>
<p>But let's be more precise:</p>
<p><strong>Prompt Engineering (noun):</strong> <em>The systematic process of designing, testing, and optimizing input instructions (prompts) to elicit specific, accurate, and useful responses from large language models (LLMs) and other AI systems.</em></p>
<p>Think of it this way:</p>
<ul>
<li><p><strong>If traditional programming</strong> is writing explicit code that tells a computer exactly what to do, step by step...</p>
</li>
<li><p><strong>Prompt engineering</strong> is more like directing a highly intelligent but literal-minded assistant—you communicate your intent, provide context, and guide the AI toward the output you need.</p>
</li>
</ul>
<p><strong>The key difference?</strong> You're not writing in Python or Java. You're writing in natural language—English, Spanish, Chinese, or any human language. Yet the precision, testing, and optimization required rival those of traditional programming.</p>
<h2 id="heading-why-this-matters-more-than-you-think"><strong>Why This Matters More Than You Think</strong></h2>
<p>Here's a statement that might sound hyperbolic but isn't: <strong>Prompt engineering is becoming one of the most valuable skills of the 21st century.</strong></p>
<p>Consider these facts:</p>
<p><strong>1. The Productivity Multiplier</strong></p>
<ul>
<li><p>A skilled prompt engineer can accomplish in 10 minutes what might take hours or days manually</p>
</li>
<li><p>Companies report <strong>30-80% productivity gains</strong> in tasks ranging from customer service to code generation</p>
</li>
<li><p><strong>The same AI model produces vastly different results</strong> depending on how you prompt it</p>
</li>
</ul>
<p><strong>2. The Democratization of Expertise</strong></p>
<ul>
<li><p>You don't need a computer science degree</p>
</li>
<li><p>You don't need to understand neural network architectures</p>
</li>
<li><p>You DO need to understand how to communicate effectively with AI</p>
</li>
</ul>
<p><strong>3. The Economic Impact</strong></p>
<ul>
<li><p>Prompt engineers at top companies earn <strong>$175,000-$335,000+</strong> annually</p>
</li>
<li><p>Every industry—from healthcare to entertainment—needs this skill</p>
</li>
<li><p>It's not replacing jobs; it's creating entirely new categories of work</p>
</li>
</ul>
<p><strong>The bottom line:</strong> In an AI-first world, <strong>your ability to communicate with AI systems is as fundamental as literacy itself.</strong></p>
<h2 id="heading-a-brief-history-how-we-got-here"><strong>A Brief History: How We Got Here</strong></h2>
<p>Understanding where prompt engineering came from helps you see where it's going.</p>
<h3 id="heading-phase-1-the-command-line-era-1950s-2000s"><strong>Phase 1: The Command Line Era (1950s-2000s)</strong></h3>
<ul>
<li><p><strong>Human-computer interaction was rigid:</strong> You typed exact commands</p>
</li>
<li><p><code>COPY A:\FILE.TXT C:\BACKUP\</code> worked. <code>copy the file</code> didn't.</p>
</li>
<li><p><strong>Zero tolerance for ambiguity</strong></p>
</li>
</ul>
<h3 id="heading-phase-2-search-and-keywords-1990s-2010s"><strong>Phase 2: Search and Keywords (1990s-2010s)</strong></h3>
<ul>
<li><p>Google taught us to think in keywords</p>
</li>
<li><p>"best pizza near me" vs. "What is the highest-rated pizza restaurant within 2 miles?"</p>
</li>
<li><p><strong>We learned to speak "search engine"</strong></p>
</li>
</ul>
<h3 id="heading-phase-3-the-dawn-of-neural-networks-2010-2017"><strong>Phase 3: The Dawn of Neural Networks (2010-2017)</strong></h3>
<ul>
<li><p>Deep learning models emerged but were specialized</p>
</li>
<li><p>Image recognition, speech-to-text, game-playing AI</p>
</li>
<li><p><strong>Still not conversational</strong></p>
</li>
</ul>
<h3 id="heading-phase-4-the-transformer-revolution-2017-2020"><strong>Phase 4: The Transformer Revolution (2017-2020)</strong></h3>
<ul>
<li><p><strong>June 2017:</strong> Google publishes "Attention is All You Need" (the Transformer paper)</p>
</li>
<li><p><strong>June 2018:</strong> OpenAI releases GPT (Generative Pre-trained Transformer)</p>
</li>
<li><p><strong>Breakthrough:</strong> Models could understand context across long passages</p>
</li>
<li><p><strong>But here's the catch:</strong> Early users discovered that <em>how you asked</em> dramatically changed <em>what you got</em></p>
</li>
</ul>
<h3 id="heading-phase-5-gpt-3-and-the-birth-of-prompt-engineering-2020-2022"><strong>Phase 5: GPT-3 and the Birth of Prompt Engineering (2020-2022)</strong></h3>
<ul>
<li><p><strong>Summer 2020:</strong> OpenAI releases GPT-3 with 175 billion parameters</p>
</li>
<li><p>Researchers and early adopters noticed: <strong>Slight prompt variations → Wildly different outputs</strong></p>
</li>
<li><p>The term "prompt engineering" gains traction</p>
</li>
<li><p><strong>Key insight:</strong> These models respond to examples, structure, and specific phrasing</p>
</li>
</ul>
<p>Example from early GPT-3 days:</p>
<p>❌ <strong>Bad Prompt:</strong> "Write about climate change"<br /><em>(Result: Generic, unfocused text)</em></p>
<p>✅ <strong>Better Prompt:</strong> "You are a climate scientist writing for policymakers. Explain the three most urgent climate actions governments should take in 2024, with specific data and examples. Format as: Action | Rationale | Expected Impact"<br /><em>(Result: Structured, authoritative, actionable)</em></p>
<h3 id="heading-phase-6-the-prompt-engineering-explosion-2022-present"><strong>Phase 6: The Prompt Engineering Explosion (2022-Present)</strong></h3>
<ul>
<li><p><strong>November 2022:</strong> ChatGPT launches and reaches 100M users in 2 months</p>
</li>
<li><p>Millions of people suddenly need to learn prompting</p>
</li>
<li><p><strong>Key developments:</strong></p>
<ul>
<li><p>Chain-of-thought prompting (Wei et al., 2022)</p>
</li>
<li><p>Instruction-following models (InstructGPT, Claude)</p>
</li>
<li><p>Multimodal models (GPT-4V, Gemini)</p>
</li>
<li><p>Specialized prompting frameworks emerge</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-phase-7-professional-practice-2023-present"><strong>Phase 7: Professional Practice (2023-Present)</strong></h3>
<ul>
<li><p>Companies hire dedicated prompt engineers</p>
</li>
<li><p>Academic research explodes (200+ papers in 2023 alone)</p>
</li>
<li><p>Tools and platforms emerge (LangChain, PromptBase, etc.)</p>
</li>
<li><p><strong>Prompt engineering becomes a formal discipline</strong></p>
</li>
</ul>
<p><strong>The trajectory is clear:</strong> We went from "type exact commands" → "think in keywords" → <strong>"architect precise instructions in natural language."</strong></p>
<h2 id="heading-why-prompt-engineering-matters-in-the-ai-era"><strong>Why Prompt Engineering Matters in the AI Era</strong></h2>
<p>Let's get concrete about why this skill is essential <em>right now</em>.</p>
<h3 id="heading-1-the-ai-capability-gap"><strong>1. The AI Capability Gap</strong></h3>
<p><strong>The Problem:</strong> Modern AI can do amazing things—but only if you know how to ask.</p>
<p><strong>Real Example:</strong></p>
<ul>
<li><p>Generic prompt: <em>"Help me with marketing"</em> → Vague, generic advice</p>
</li>
<li><p>Engineered prompt: <em>"I'm launching a B2B SaaS product for healthcare compliance. Create a 90-day content marketing strategy targeting hospital CIOs, including: (1) content themes by week, (2) distribution channels with rationale, (3) KPIs to track, (4) budget allocation across channels. Present in table format."</em> → Detailed, actionable strategy</p>
</li>
</ul>
<p><strong>The same AI. Dramatically different value.</strong></p>
<h3 id="heading-2-the-quality-cost-equation"><strong>2. The Quality-Cost Equation</strong></h3>
<p><strong>Important Reality:</strong> AI API costs are based on tokens (chunks of text, each roughly three-quarters of a word)</p>
<ul>
<li><p>Poor prompt: Uses 500 tokens, gets mediocre result, needs 3 follow-ups = 2,000 tokens total</p>
</li>
<li><p>Engineered prompt: Uses 200 tokens, gets excellent result on first try = 200 tokens total</p>
</li>
</ul>
<p><strong>That's a 10x efficiency difference.</strong> At scale, this means thousands or millions in savings.</p>
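<p>The arithmetic above can be sketched in a few lines. Note that the per-token price here is an assumed illustrative rate, not any provider's actual pricing:</p>
<pre><code class="lang-python">PRICE_PER_1K_TOKENS = 0.01  # assumed illustrative rate in USD, not a real quote

def cost(total_tokens, price_per_1k=PRICE_PER_1K_TOKENS):
    """Return the cost in USD of a request at the assumed rate."""
    return total_tokens / 1000 * price_per_1k

poor = cost(2000)       # 500-token prompt plus three follow-up rounds
engineered = cost(200)  # one well-specified 200-token prompt
print(f"poor: ${poor:.3f}, engineered: ${engineered:.3f}, ratio: {poor / engineered:.0f}x")
</code></pre>
<p>Whatever the real rate is, the ratio between the two approaches stays the same, which is why the savings compound at scale.</p>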
<h3 id="heading-3-the-accuracy-safety-imperative"><strong>3. The Accuracy-Safety Imperative</strong></h3>
<p><strong>AI systems can:</strong></p>
<ul>
<li><p>Generate false information (hallucinations)</p>
</li>
<li><p>Exhibit biases</p>
</li>
<li><p>Miss crucial nuances</p>
</li>
<li><p>Produce inconsistent results</p>
</li>
</ul>
<p><strong>Prompt engineering mitigates these issues</strong> through:</p>
<ul>
<li><p>Explicit constraints and guidelines</p>
</li>
<li><p>Requests for citations and verification</p>
</li>
<li><p>Step-by-step reasoning (showing work)</p>
</li>
<li><p>Format specifications that enable validation</p>
</li>
</ul>
<p><strong>In high-stakes domains (healthcare, legal, financial)</strong>, good prompting isn't optional—it's essential.</p>
<h3 id="heading-4-the-competitive-advantage"><strong>4. The Competitive Advantage</strong></h3>
<p><strong>Here's the uncomfortable truth:</strong> Your competitors are using AI. The question is: <strong>Are they using it well?</strong></p>
<p>Companies with strong prompt engineering capabilities:</p>
<ul>
<li><p><strong>Launch products faster</strong> (rapid prototyping with AI)</p>
</li>
<li><p><strong>Scale operations efficiently</strong> (AI handles routine work)</p>
</li>
<li><p><strong>Innovate constantly</strong> (AI as ideation partner)</p>
</li>
<li><p><strong>Reduce operational costs</strong> (automation with accuracy)</p>
</li>
</ul>
<p><strong>The gap between companies that prompt well and those that don't will widen dramatically.</strong></p>
<h3 id="heading-5-the-human-ai-collaboration-future"><strong>5. The Human-AI Collaboration Future</strong></h3>
<p><strong>This isn't about replacement—it's about augmentation.</strong></p>
<p>The future workplace has:</p>
<ul>
<li><p><strong>Humans:</strong> Strategy, creativity, empathy, judgment, domain expertise</p>
</li>
<li><p><strong>AI:</strong> Pattern recognition, information synthesis, rapid generation, 24/7 availability</p>
</li>
<li><p><strong>Prompt Engineering:</strong> The connector between the two</p>
</li>
</ul>
<p><strong>Your value isn't just what you know—it's how well you can leverage AI to amplify what you know.</strong></p>
<h2 id="heading-key-terminology-glossary"><strong>Key Terminology Glossary</strong></h2>
<p>Let's build your foundational vocabulary. <strong>Master these terms—they're the language of prompt engineering.</strong></p>
<h3 id="heading-core-concepts"><strong>Core Concepts</strong></h3>
<p><strong>🔹 Prompt</strong><br />The input instruction or query you provide to an AI model. Can be a question, command, or complex multi-part instruction.</p>
<ul>
<li><em>Example: "Explain quantum computing to a 10-year-old"</em></li>
</ul>
<p><strong>🔹 Completion / Response / Output</strong><br />What the AI generates based on your prompt.</p>
<p><strong>🔹 Token</strong><br />The basic unit of text for LLMs. Roughly 1 token ≈ 4 characters or 0.75 words in English.</p>
<ul>
<li><p><em>"Hello world!" = 3 tokens</em></p>
</li>
<li><p><strong>Why it matters:</strong> Models have token limits; API costs are token-based</p>
</li>
</ul>
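<p>For quick budgeting you can turn the rule of thumb above into code. This is only a rough heuristic, not a real tokenizer; exact counts require the model's own tokenizer (e.g. OpenAI's tiktoken library):</p>
<pre><code class="lang-python">def rough_token_count(text):
    """Rough token estimate using the ~4-characters-per-token rule of thumb.
    Only a heuristic; real tokenizers split text differently."""
    return max(1, round(len(text) / 4))

print(rough_token_count("Hello world!"))  # 12 characters -> about 3 tokens
print(rough_token_count("A" * 400))       # about 100 tokens
</code></pre>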
<p><strong>🔹 Context Window</strong><br />The maximum amount of text (in tokens) a model can process at once—including both your prompt and its response.</p>
<ul>
<li><p>GPT-4: 8K-128K tokens</p>
</li>
<li><p>Claude 3: Up to 200K tokens</p>
</li>
<li><p><strong>Practical impact:</strong> Determines how much information you can include</p>
</li>
</ul>
<p><strong>🔹 Temperature</strong><br />A setting (0.0 to 2.0) controlling randomness in AI responses:</p>
<ul>
<li><p><strong>Low (0-0.3):</strong> Focused, deterministic, consistent → Use for factual tasks</p>
</li>
<li><p><strong>Medium (0.7-1.0):</strong> Balanced creativity → General use</p>
</li>
<li><p><strong>High (1.0+):</strong> Creative, varied, unpredictable → Use for brainstorming</p>
</li>
</ul>
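<p>Under the hood, temperature is simply a divisor applied to the model's raw scores (logits) before they are converted to probabilities. This minimal sketch, using made-up toy scores, shows why a low temperature concentrates probability on the top token while a high temperature spreads it out:</p>
<pre><code class="lang-python">import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities; lower temperature sharpens
    the distribution, higher temperature flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # toy scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 2.0)
print([round(p, 3) for p in cold])  # top token dominates
print([round(p, 3) for p in hot])   # probabilities are much closer together
</code></pre>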
<p><strong>🔹 Top-p (Nucleus Sampling)</strong><br />An alternative to temperature; restricts sampling to the smallest set of likely next tokens whose combined probability reaches p.</p>
<ul>
<li><p><strong>Low (0.1):</strong> Very focused</p>
</li>
<li><p><strong>High (0.9):</strong> More diverse</p>
</li>
</ul>
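<p>Nucleus sampling can be sketched with a toy distribution: keep the smallest set of most-likely tokens whose cumulative probability reaches top-p, then sample only from that set. The probabilities below are made up for illustration:</p>
<pre><code class="lang-python">def nucleus(probs, top_p):
    """Indices of the smallest set of most-likely tokens whose
    cumulative probability reaches top_p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

probs = [0.6, 0.25, 0.1, 0.05]  # toy next-token distribution
print(nucleus(probs, 0.1))  # only the single most likely token survives
print(nucleus(probs, 0.9))  # a larger, more diverse candidate pool
</code></pre>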
<h3 id="heading-prompting-techniques"><strong>Prompting Techniques</strong></h3>
<p><strong>🔹 Zero-Shot Prompting</strong><br />Asking the AI to perform a task without any examples.</p>
<ul>
<li><em>"Translate this to French: [text]"</em></li>
</ul>
<p><strong>🔹 Few-Shot Prompting</strong><br />Providing examples in your prompt to guide the AI's response format and style.</p>
<pre><code>Example 1: [input] → [output]
Example 2: [input] → [output]
Now do: [your input]
</code></pre>
<p><strong>🔹 Chain-of-Thought (CoT)</strong><br />Prompting the AI to show its reasoning step-by-step before answering.</p>
<ul>
<li><p><em>"Let's solve this step by step:"</em></p>
</li>
<li><p><strong>Dramatically improves reasoning accuracy</strong></p>
</li>
</ul>
<p><strong>🔹 System Prompt / System Message</strong><br />A special instruction that sets the AI's behavior for an entire conversation (not all models expose this).</p>
<ul>
<li><em>"You are a helpful Python programming tutor..."</em></li>
</ul>
<p><strong>🔹 Role Prompting</strong><br />Assigning the AI a specific role or persona.</p>
<ul>
<li><em>"Act as a senior financial analyst..."</em></li>
</ul>
<p><strong>🔹 Prompt Chaining</strong><br />Using the output of one prompt as input to another, breaking complex tasks into steps.</p>
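<p>A minimal chaining sketch, with the model call stubbed out (the <code>call_model</code> function here is a hypothetical stand-in; a real version would call a provider's API):</p>
<pre><code class="lang-python">def call_model(prompt):
    """Hypothetical stand-in for a real LLM API call, so the chain
    below is runnable and deterministic."""
    if prompt.startswith("Summarize:"):
        text = prompt[len("Summarize:"):].strip()
        return text.split(".")[0] + "."  # fake summary: first sentence only
    if prompt.startswith("Translate to French:"):
        return "[FR] " + prompt[len("Translate to French:"):].strip()
    return prompt

def summarize_then_translate(document):
    """Two-step chain: the first prompt's output becomes the second prompt's input."""
    summary = call_model(f"Summarize: {document}")
    return call_model(f"Translate to French: {summary}")

doc = "Prompt chaining splits work into steps. Each step stays simple."
print(summarize_then_translate(doc))
</code></pre>
<p>The design benefit: each step gets a small, focused prompt, which is easier to test and debug than one giant instruction.</p>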
<h3 id="heading-model-behaviors"><strong>Model Behaviors</strong></h3>
<p><strong>🔹 Hallucination</strong><br />When an AI generates false information with confidence.</p>
<ul>
<li><p><strong>Critical to understand:</strong> AI doesn't "know" things; it predicts plausible text</p>
</li>
<li><p><strong>Mitigation:</strong> Request citations, use RAG, verify outputs</p>
</li>
</ul>
<p><strong>🔹 Bias</strong><br />Systematic errors reflecting biases in training data (gender, race, cultural, etc.).</p>
<ul>
<li><strong>Your responsibility:</strong> Prompt carefully, validate outputs</li>
</ul>
<p><strong>🔹 Instruction Following</strong><br />How well a model adheres to explicit directions in your prompt.</p>
<ul>
<li><strong>Varies by model:</strong> GPT-4, Claude, and Gemini are specifically trained for this</li>
</ul>
<p><strong>🔹 Steerability</strong><br />The degree to which you can control a model's output through prompting.</p>
<h3 id="heading-advanced-concepts"><strong>Advanced Concepts</strong></h3>
<p><strong>🔹 Fine-Tuning</strong><br />Training a model further on specific data (beyond prompting).</p>
<ul>
<li><strong>When prompting isn't enough:</strong> Highly specialized domains, consistent formatting needs</li>
</ul>
<p><strong>🔹 Retrieval-Augmented Generation (RAG)</strong><br />Combining LLMs with external knowledge sources (databases, documents).</p>
<ul>
<li><strong>The solution to:</strong> Knowledge cutoffs, proprietary information, hallucinations</li>
</ul>
<p><strong>🔹 Embeddings</strong><br />Numerical representations of text that capture semantic meaning.</p>
<ul>
<li><strong>Used for:</strong> Semantic search, finding similar content, RAG systems</li>
</ul>
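<p>Similarity between embeddings is usually measured with cosine similarity. The vectors below are made-up toy values (real embeddings have hundreds or thousands of dimensions), but the mechanics are the same:</p>
<pre><code class="lang-python">import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means similar
    direction (similar meaning), near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings" for illustration only.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
spreadsheet = [0.0, 0.1, 0.9]

print(round(cosine_similarity(cat, kitten), 3))       # high: related concepts
print(round(cosine_similarity(cat, spreadsheet), 3))  # low: unrelated concepts
</code></pre>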
<p><strong>🔹 Tokens per Minute (TPM) / Requests per Minute (RPM)</strong><br />Rate limits on API usage.</p>
<p><strong>🔹 Latency</strong><br />Time between sending a prompt and receiving the complete response.</p>
<h3 id="heading-evaluation-metrics"><strong>Evaluation Metrics</strong></h3>
<p><strong>🔹 Accuracy</strong><br />How often outputs are factually correct.</p>
<p><strong>🔹 Relevance</strong><br />How well outputs address the actual query.</p>
<p><strong>🔹 Coherence</strong><br />How logically consistent and well-structured outputs are.</p>
<p><strong>🔹 Fluency</strong><br />How natural and grammatically correct the text is.</p>
<p><strong>🔹 Consistency</strong><br />How similar outputs are for repeated identical prompts (when temperature is low).</p>
<h2 id="heading-the-critical-mindset-shift"><strong>The Critical Mindset Shift</strong></h2>
<p>Before we go further, you need to understand something fundamental:</p>
<h3 id="heading-ai-doesnt-understandit-predicts"><strong>AI Doesn't "Understand"—It Predicts</strong></h3>
<p><strong>This is crucial:</strong> When you prompt an LLM, you're not accessing a database of facts. You're activating a massive statistical model that predicts the most likely next word, then the next, then the next.</p>
<p><strong>What this means:</strong></p>
<ul>
<li><p>✅ It can sound confident while being completely wrong</p>
</li>
<li><p>✅ The same prompt can yield different results (at higher temperatures)</p>
</li>
<li><p>✅ It has no real-time information (unless connected to search/APIs)</p>
</li>
<li><p>✅ It can't truly "think"—it generates plausible continuations</p>
</li>
</ul>
<p><strong>Why this matters for prompt engineering:</strong><br />Your job is to <strong>set up the statistical probability space</strong> so that useful, accurate outputs are most likely. You do this through:</p>
<ul>
<li><p>Clear instructions</p>
</li>
<li><p>Relevant context</p>
</li>
<li><p>Examples and patterns</p>
</li>
<li><p>Constraints and formats</p>
</li>
<li><p>Verification steps</p>
</li>
</ul>
<p><strong>Think of yourself as a conductor:</strong> The orchestra (AI) has incredible capability, but the quality of the symphony depends entirely on how you direct it.</p>
<h2 id="heading-the-three-pillars-of-effective-prompting"><strong>The Three Pillars of Effective Prompting</strong></h2>
<p>As you go through this series, everything traces back to these three principles:</p>
<h3 id="heading-1-clarity"><strong>1. Clarity</strong></h3>
<p><strong>Ambiguous prompts = Unpredictable outputs</strong></p>
<p>Compare:</p>
<ul>
<li><p>❌ <em>"Write about cars"</em></p>
</li>
<li><p>✅ <em>"Write a 500-word article explaining the three main differences between hybrid and electric vehicles for consumers considering their next car purchase. Include cost, environmental impact, and practical considerations."</em></p>
</li>
</ul>
<h3 id="heading-2-context"><strong>2. Context</strong></h3>
<p><strong>Give the AI what it needs to understand your situation</strong></p>
<p>Compare:</p>
<ul>
<li><p>❌ <em>"How should I respond?"</em></p>
</li>
<li><p>✅ <em>"I'm a startup founder. A major client just asked for a 50% discount or they'll switch to a competitor. We'd lose $100K annually but can't afford the discount. Draft a diplomatic email response that: (1) acknowledges their concerns, (2) offers alternative value-adds instead of discounts, (3) keeps the relationship positive."</em></p>
</li>
</ul>
<h3 id="heading-3-constraints"><strong>3. Constraints</strong></h3>
<p><strong>Define the boundaries of acceptable outputs</strong></p>
<p>Compare:</p>
<ul>
<li><p>❌ <em>"Give me marketing ideas"</em></p>
</li>
<li><p>✅ <em>"Generate 5 marketing campaign ideas for a B2B cybersecurity product. Requirements: (1) Budget under $10K, (2) Focus on LinkedIn and industry conferences, (3) Target IT directors at mid-size healthcare companies, (4) Measurable within 60 days. Format: Campaign name | Tactic | Budget | Expected metric."</em></p>
</li>
</ul>
<p><strong>Master these three, and you're 80% of the way to effective prompting.</strong></p>
<h2 id="heading-your-first-exercise"><strong>Your First Exercise</strong></h2>
<p>Let's make this practical immediately. Try this exercise:</p>
<p><strong>Task:</strong> Get an AI to write you a professional email.</p>
<p><strong>Attempt 1 (Poor prompt):</strong></p>
<pre><code>Write an email
</code></pre>
<p><strong>Attempt 2 (Better):</strong></p>
<pre><code>Write a professional email declining a job offer
</code></pre>
<p><strong>Attempt 3 (Good):</strong></p>
<pre><code>Write a professional email declining a job offer. I'm declining because I accepted another position. Keep it brief, polite, and leave the door open for future opportunities.
</code></pre>
<p><strong>Attempt 4 (Excellent):</strong></p>
<pre><code>Context: I interviewed for a Senior Product Manager role at TechCorp. They offered me the position, but I've accepted a role at a different company that aligns better with my career goals.

Task: Draft a professional email declining their offer.

Requirements:
- Tone: Grateful and professional, not apologetic
- Length: 3-4 short paragraphs
- Include: (1) gratitude for the offer, (2) clear decision to decline, (3) brief reason (accepted another opportunity), (4) expression of interest in staying connected
- Avoid: Over-explaining, leaving ambiguity about decision

Recipient: Jennifer Martinez, Hiring Manager
</code></pre>
<p><strong>Notice the progression?</strong> Each version gives the AI more to work with—more clarity, context, and constraints.</p>
<p><strong>Your assignment:</strong> Try all four versions with your AI of choice. Compare the outputs. Feel the difference.</p>
<h2 id="heading-common-misconceptions-to-abandon-now"><strong>Common Misconceptions to Abandon Now</strong></h2>
<p><strong>❌ Misconception 1:</strong> "AI is magic—it should just know what I want"<br /><strong>✅ Reality:</strong> AI is powerful pattern matching. Garbage in = garbage out.</p>
<p><strong>❌ Misconception 2:</strong> "Longer prompts are always better"<br /><strong>✅ Reality:</strong> Precision beats length. Concise, well-structured prompts often outperform verbose ones.</p>
<p><strong>❌ Misconception 3:</strong> "Prompt engineering is just for technical people"<br /><strong>✅ Reality:</strong> It's a communication skill. Writers, marketers, and domain experts often excel.</p>
<p><strong>❌ Misconception 4:</strong> "Once I find a good prompt, I'm done"<br /><strong>✅ Reality:</strong> Prompts need iteration, testing, and maintenance as models evolve.</p>
<p><strong>❌ Misconception 5:</strong> "AI will make my job obsolete"<br /><strong>✅ Reality:</strong> People who use AI well will replace people who don't. The tool amplifies; it doesn't replace judgment.</p>
<h2 id="heading-the-ethical-dimension"><strong>The Ethical Dimension</strong></h2>
<p>Before we conclude, let's address the elephant in the room: <strong>With great prompting power comes great responsibility.</strong></p>
<p><strong>Key ethical considerations:</strong></p>
<p><strong>1. Transparency</strong></p>
<ul>
<li><p>Be clear when content is AI-generated</p>
</li>
<li><p>Don't misrepresent AI outputs as human work (where it matters)</p>
</li>
</ul>
<p><strong>2. Verification</strong></p>
<ul>
<li><p>Always fact-check AI outputs for important decisions</p>
</li>
<li><p>Don't blindly trust—AI makes mistakes</p>
</li>
</ul>
<p><strong>3. Bias Awareness</strong></p>
<ul>
<li><p>Recognize that AI inherits biases from training data</p>
</li>
<li><p>Test your prompts for potential biased outputs</p>
</li>
<li><p>Actively prompt for balanced perspectives</p>
</li>
</ul>
<p><strong>4. Privacy</strong></p>
<ul>
<li><p>Never input confidential, private, or sensitive information into public AI systems</p>
</li>
<li><p>Understand data retention policies</p>
</li>
</ul>
<p><strong>5. Attribution</strong></p>
<ul>
<li><p>When AI helps create something, consider appropriate attribution</p>
</li>
<li><p>Respect intellectual property laws (evolving rapidly)</p>
</li>
</ul>
<p><strong>6. Impact</strong></p>
<ul>
<li><p>Consider the downstream effects of scaled AI automation</p>
</li>
<li><p>Use the technology to augment human capability, not exploit vulnerabilities</p>
</li>
</ul>
<p><strong>We'll dedicate an entire post to this, but start thinking about it now.</strong></p>
<h2 id="heading-what-makes-a-prompt-engineer"><strong>What Makes a Prompt Engineer?</strong></h2>
<p>You might be wondering: "Am I cut out for this?"</p>
<p><strong>Here's what actually predicts success:</strong></p>
<p>✅ <strong>Curiosity:</strong> Willingness to experiment and iterate<br />✅ <strong>Clarity of Thought:</strong> Ability to articulate what you want precisely<br />✅ <strong>Domain Knowledge:</strong> Understanding the subject matter you're prompting about<br />✅ <strong>Pattern Recognition:</strong> Noticing what works and what doesn't<br />✅ <strong>Patience:</strong> Testing and refining until you get it right</p>
<p><strong>Not required:</strong></p>
<ul>
<li><p>❌ Computer science degree</p>
</li>
<li><p>❌ Math expertise</p>
</li>
<li><p>❌ Programming background (though it helps)</p>
</li>
</ul>
<p><strong>The best prompt engineers I know come from diverse backgrounds:</strong> journalism, teaching, product management, psychology, law, creative writing.</p>
<p><strong>The common thread?</strong> They're excellent communicators who think systematically.</p>
<h2 id="heading-your-prompt-engineering-journey-starts-now"><strong>Your Prompt Engineering Journey Starts Now</strong></h2>
<p>Here's what to do after reading this post:</p>
<h3 id="heading-immediate-actions-today"><strong>Immediate Actions (Today):</strong></h3>
<ol>
<li><p><strong>Open an AI assistant</strong> (ChatGPT, Claude, Gemini, etc.)</p>
</li>
<li><p><strong>Try this exercise:</strong></p>
<ul>
<li><p>Ask: <em>"What is photosynthesis?"</em></p>
</li>
<li><p>Then ask: <em>"Explain photosynthesis to three audiences: (1) a 5th grader, (2) a high school biology student, (3) a university botany professor. Use analogies for the 5th grader, technical accuracy for the student, and research-level detail for the professor."</em></p>
</li>
<li><p>Compare the results</p>
</li>
</ul>
</li>
<li><p><strong>Observe the difference:</strong> That's prompt engineering in action</p>
</li>
</ol>
<h3 id="heading-this-week"><strong>This Week:</strong></h3>
<ul>
<li><p>Experiment with 3-5 different tasks (writing, analysis, coding, etc.)</p>
</li>
<li><p>For each, compare a simple prompt vs. a detailed, well-structured prompt</p>
</li>
<li><p>Document what works and what doesn't</p>
</li>
<li><p>Start building your own prompt library</p>
</li>
</ul>
<h3 id="heading-before-next-post"><strong>Before Next Post:</strong></h3>
<ul>
<li><p>Choose one regular work task you perform</p>
</li>
<li><p>Draft a prompt that could help automate or improve it</p>
</li>
<li><p>Test and iterate on that prompt</p>
</li>
<li><p>Note: What worked? What didn't? What surprised you?</p>
</li>
</ul>
<h2 id="heading-looking-ahead-post-2-preview"><strong>Looking Ahead: Post #2 Preview</strong></h2>
<p>In our next post, <strong>"Understanding Large Language Models: What You Need to Know,"</strong> we'll pull back the curtain on how these systems actually work:</p>
<ul>
<li><p>The transformer architecture (simplified, no math required)</p>
</li>
<li><p>Why tokenization matters more than you think</p>
</li>
<li><p>What "training data cutoff" really means</p>
</li>
<li><p>How different models compare (GPT vs. Claude vs. Gemini vs. open-source)</p>
</li>
<li><p>What parameters like temperature and top-p actually do</p>
</li>
<li><p>The practical limitations you need to work around</p>
</li>
</ul>
<p><strong>You don't need to become an ML engineer, but understanding the engine helps you drive better.</strong></p>
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p><strong>Prompt engineering is not a fad.</strong> It's not a temporary skill that will be automated away next year. It's the fundamental interface between human intelligence and artificial intelligence.</p>
<p><strong>As models improve, prompt engineering becomes MORE important, not less.</strong> Better models can do more—but only if you know how to harness that capability.</p>
<p><strong>This series is your comprehensive guide.</strong> We're going deep—deeper than any other resource available. By the time you finish all 42 posts, you'll have:</p>
<ul>
<li><p>✅ Mastery of every major prompting technique</p>
</li>
<li><p>✅ A library of tested, reusable prompts</p>
</li>
<li><p>✅ Understanding of tools and frameworks</p>
</li>
<li><p>✅ Real-world application experience</p>
</li>
<li><p>✅ Professional-grade skills for your career</p>
</li>
</ul>
<p><strong>But here's the secret:</strong> You don't need to wait until the end. Every post will give you immediately applicable skills. Start using what you learn right away.</p>
<p><strong>The future belongs to those who can collaborate effectively with AI. You've just taken the first step.</strong></p>
<h2 id="heading-join-the-conversation"><strong>Join the Conversation</strong></h2>
<p><strong>What's your biggest prompt engineering question?</strong> Drop it in the comments—I'm using reader questions to shape upcoming posts.</p>
<p><strong>Share your first prompt experiment:</strong> Post a before/after example of improving a prompt. Let's learn together.</p>
<p><strong>Subscribe for the series:</strong> Don't miss a post. New content every [your schedule].</p>
<p><strong>Next up:</strong> <em>Post #2 - "Understanding Large Language Models: What You Need to Know"</em></p>
<p><strong>The journey to prompt engineering mastery starts with a single prompt. Make yours count.</strong></p>
<p><em>Have you discovered a prompt pattern that works particularly well? Or hit a wall with something that should work but doesn't? Share your experience—this series is built on real-world practice, not just theory.</em></p>
<p><strong>Welcome to the cutting edge. Let's build something remarkable together.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Anatomy of a Great Prompt: Getting the Most Out of Generative AI]]></title><description><![CDATA[In this post, we’ll walk through the anatomy of a great prompt, illustrate every step with vivid examples, and even peek under the hood to see what happens technically when you hit “send.” By the end, you’ll be able to craft prompts that unlock the f...]]></description><link>https://deviloper.in/anatomy-of-a-great-prompt-getting-the-most-out-of-generative-ai</link><guid isPermaLink="true">https://deviloper.in/anatomy-of-a-great-prompt-getting-the-most-out-of-generative-ai</guid><category><![CDATA[generative ai]]></category><category><![CDATA[#PromptEngineering]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Wed, 17 Dec 2025 13:01:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765976364783/a3481adc-46ef-4375-9fea-7a8eff4184f0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this post, we’ll walk through the anatomy of a great prompt, illustrate every step with vivid examples, and even peek under the hood to see what happens technically when you hit “send.” By the end, you’ll be able to craft prompts that unlock the full power of Generative AI—whether you’re a curious beginner or a seasoned techie.</p>
<h2 id="heading-think-of-a-prompt-like-a-recipe">Think of a Prompt Like a Recipe</h2>
<p>Imagine you’re texting a friend to make dinner. You could say:</p>
<ul>
<li><p>“Make me dinner.”</p>
</li>
<li><p>OR “Make me spaghetti with tomato sauce, extra garlic, and no cheese.”</p>
</li>
</ul>
<p>The second version gives your friend all the details to make it just right. That’s what a great prompt does for AI: it hands over the full recipe, not just a wish list.</p>
<h2 id="heading-the-four-essential-ingredients-of-a-great-prompt">The Four Essential Ingredients of a Great Prompt</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765976347415/fdf8efa0-0038-4515-bcf4-18c69eb8800d.png" alt="" class="image--center mx-auto" /></p>
<p>Let’s slice this recipe even finer. Whether you want a poem, a summary, or business insight from GenAI, these four ingredients are key:</p>
<h3 id="heading-1-clarity">1. <strong>Clarity</strong></h3>
<p>Be crystal-clear about what you want. Avoid fuzzy, open-ended requests.</p>
<p><strong>Examples:</strong></p>
<ul>
<li><p>Not clear: “Write something about birthdays.”</p>
</li>
<li><p>Clear: “Write a funny birthday poem for a 12-year-old who loves dinosaurs.”</p>
</li>
</ul>
<p>Clarity gets you results that actually match your needs.</p>
<h3 id="heading-2-context">2. <strong>Context</strong></h3>
<p>Provide background, purpose, or audience to help the AI dial in its answers.</p>
<p><strong>Examples:</strong></p>
<ul>
<li><p>Weak: “Suggest a gift.”</p>
</li>
<li><p>Strong: “Suggest a birthday gift for my grandmother who enjoys gardening and lives in Florida.”</p>
</li>
</ul>
<p>Context works like a north star, pointing the AI in the right direction.</p>
<h3 id="heading-3-instructions">3. <strong>Instructions</strong></h3>
<p>Tell AI what <em>form</em> you want: a list, bullet points, a story, or even a tone (formal, playful, technical, etc.).</p>
<p><strong>Examples:</strong></p>
<ul>
<li><p>Weak: “Explain rain.”</p>
</li>
<li><p>Strong: “Explain how rain works to a 7-year-old, using simple words and a short story.”</p>
</li>
</ul>
<p>The more you guide, the closer the results land.</p>
<h3 id="heading-4-constraints">4. <strong>Constraints</strong></h3>
<p>Set boundaries for length, amount, or format so you don’t get too much or too little.</p>
<p><strong>Examples:</strong></p>
<ul>
<li><p>Broad: “List movies.”</p>
</li>
<li><p>Precise: “List three adventure movies suitable for family movie night.”</p>
</li>
</ul>
<p>Constraints help AI respect your time and attention.</p>
<h2 id="heading-bad-vs-good-prompts-see-the-difference">Bad vs. Good Prompts: See the Difference</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<th>Vague Prompt</th><th>Great Prompt</th></tr>
</thead>
<tbody>
<tr>
<td>Tell me about Paris.</td><td>Write a two-paragraph travel guide about Paris for first-time visitors, focusing on food and famous sights.</td></tr>
<tr>
<td>Make a joke.</td><td>Make a lighthearted joke about cats that a 10-year-old can understand.</td></tr>
<tr>
<td>Help with my resume.</td><td>Give me three bullet points to improve my resume for an entry-level marketing job.</td></tr>
</tbody>
</table>
</div><p>Notice how the upgraded prompts include clarity, context, instructions, and constraints. The AI now knows exactly where you want to go and how to help you get there.</p>
<h2 id="heading-real-life-analogy-giving-great-directions">Real-Life Analogy: Giving Great Directions</h2>
<p>Picture a friend trying to find your house:</p>
<ul>
<li><p>“Go that way until you see a tree.”<br />  (Confusion ahead!)</p>
</li>
<li><p>“Drive straight for two blocks, turn left at the bakery, and my blue door’s on the right.”<br />  (Success!)</p>
</li>
</ul>
<p>Great prompts are like great directions—they’re friendly, focused, and easy to follow.</p>
<h2 id="heading-upgrade-challenge-turn-weak-prompts-into-strong-ones">Upgrade Challenge: Turn Weak Prompts Into Strong Ones</h2>
<p>Try transforming these:</p>
<ol>
<li><p>“Write an email.”<br /> → “Write a short, polite email to my professor requesting a deadline extension because I was sick.”</p>
</li>
<li><p>“What’s a good dinner?”<br /> → “What’s a quick vegetarian dinner recipe with just five ingredients?”</p>
</li>
<li><p>“Explain photosynthesis.”<br /> → “Explain photosynthesis to someone with no science background using a simple, step-by-step analogy.”</p>
</li>
</ol>
<p>Practicing this skill makes prompting feel like second nature.</p>
<h2 id="heading-beyond-basics-the-prompts-that-power-professionals">Beyond Basics: The Prompts that Power Professionals</h2>
<p>You can use these principles anywhere:</p>
<ul>
<li><p><strong>Business</strong>: “Create a SWOT analysis for a small online toy store looking to expand internationally. Limit each section to two bullet points.”</p>
</li>
<li><p><strong>Education</strong>: “Summarize Einstein’s theory of relativity in under 150 words for a high school science newsletter.”</p>
</li>
<li><p><strong>Personal</strong>: “List three creative, affordable date night ideas for couples in New York City.”</p>
</li>
</ul>
<p>These detailed prompts set clear expectations, saving you time and frustration.</p>
<h2 id="heading-key-takeaways">Key Takeaways</h2>
<ul>
<li><p>The clearer, more contextual, and better-structured your prompt, the better and more relevant the result.</p>
</li>
<li><p>Great prompts save you time, frustration, and guesswork, making GenAI more useful and delightful—no matter your goal or industry.</p>
</li>
<li><p>Under the hood, your instructions are taken apart, analyzed, and rebuilt by a transformer model trained to spot patterns, follow rules, and respect every detail you provide.</p>
</li>
</ul>
<p>Clear prompts aren’t just about saving time—they’re about unlocking the true magic of AI: getting answers that feel written just for you.</p>
]]></content:encoded></item><item><title><![CDATA[Introduction to GenAI and Prompt Engineering]]></title><description><![CDATA[What is GenAI and Why Does Prompting Matter?
If you’ve ever wondered how people interact with AI tools, or why some folks seem to get exactly what they want from tools like ChatGPT while others receive confusing or generic responses, this article is ...]]></description><link>https://deviloper.in/introduction-to-genai-and-prompt-engineering</link><guid isPermaLink="true">https://deviloper.in/introduction-to-genai-and-prompt-engineering</guid><category><![CDATA[generative ai]]></category><category><![CDATA[#PromptEngineering]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Wed, 17 Dec 2025 12:40:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765975063824/7f2b1657-08ed-4bc9-a224-2573f7d8f78f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-genai-and-why-does-prompting-matter">What is GenAI and Why Does Prompting Matter?</h2>
<p>If you’ve ever wondered how people interact with AI tools, or why some folks seem to get exactly what they want from tools like ChatGPT while others receive confusing or generic responses, this article is for you. We’ll unveil the basics of Generative AI (GenAI), teach you about prompts, and set you up for success in the art of communicating with AI—no matter your background.</p>
<h2 id="heading-what-is-genai-breaking-down-the-buzzword">What is GenAI? Breaking Down the Buzzword</h2>
<p>At its core, <strong>GenAI</strong> stands for Generative Artificial Intelligence. It’s not just another tech industry fad—it signifies a quantum leap in how computers can support us, work with us, and even create for us.</p>
<h3 id="heading-genai-in-plain-english">GenAI in Plain English</h3>
<p>Think of GenAI as a highly intelligent assistant, one who doesn’t just fetch information, but who actually creates content. Whether you want it to <strong>write a story, summarize a report, sketch a logo, or compose a melody</strong>, GenAI listens to (or rather, “reads”) your instructions and produces something new. It doesn’t simply retrieve facts; it uses patterns learned from mountains of information to generate original outputs.</p>
<h3 id="heading-real-life-example-the-coffee-shop-scenario">Real-Life Example: The Coffee Shop Scenario</h3>
<p>Let’s put GenAI into a real-world situation you already know. Picture yourself at your favorite coffee shop:</p>
<ul>
<li><p><strong>You say:</strong> “One cappuccino, please.”</p>
<ul>
<li>The barista crafts your drink exactly to your taste.</li>
</ul>
</li>
</ul>
<p>But what if you’re less specific?</p>
<ul>
<li><p><strong>You say:</strong> “Give me something hot.”</p>
<ul>
<li>The barista could hand you tea, cocoa, an espresso… it’s anyone’s guess!</li>
</ul>
</li>
</ul>
<p>This is exactly how GenAI works: the clarity and detail of your request (or prompt) directly influence the success and accuracy of what you get back.</p>
<h2 id="heading-what-is-a-prompt-your-magic-wand-for-genai">What is a Prompt? Your “Magic Wand” for GenAI</h2>
<p>A <strong>prompt</strong> is what you say or write to “instruct” the AI. It’s your command, your wish, your ask—wrapped up in a short phrase, a question, or even a detailed scenario.</p>
<ul>
<li><p>Prompts can be simple:</p>
<ul>
<li>“Summarize this article.”</li>
</ul>
</li>
<li><p>Or complex:</p>
<ul>
<li>“Write a friendly email to my team, summarizing our project progress, highlighting Sarah’s achievement, and suggesting a virtual celebration.”</li>
</ul>
</li>
</ul>
<h3 id="heading-analogy-the-genie-in-the-lamp">Analogy: The Genie in the Lamp</h3>
<p>Recall fairy tales about genies who grant wishes:</p>
<ul>
<li><p><strong>Vague wish:</strong> “I want to be happy.”</p>
</li>
<li><p><strong>Specific wish:</strong> “I want to have a picnic in Central Park this Saturday with my friends, sunny weather, and no bugs.”</p>
</li>
</ul>
<p>The second wish is much more likely to lead to the outcome you desire. Similarly, your <em>prompt</em> is how you “wish” something out of GenAI—the clearer, the better.</p>
<h2 id="heading-why-should-you-care-about-prompts">Why Should You Care About Prompts?</h2>
<p>Learning to use the right prompt is like finding the magic key that unlocks GenAI’s full potential. Good prompts can help you:</p>
<ul>
<li><p>Produce emails, reports, social posts, or creative writing.</p>
</li>
<li><p>Analyze data, summarize complex information, or brainstorm fresh ideas.</p>
</li>
<li><p>Automate repetitive tasks, create images or code snippets, and more.</p>
</li>
</ul>
<h3 id="heading-everyday-example-email-to-your-boss">Everyday Example: Email to Your Boss</h3>
<p>Let’s make this practical:</p>
<ul>
<li><p><strong>Poor prompt:</strong></p>
<ul>
<li><p>“Help me.”</p>
<ul>
<li>(GenAI: “How can I help you?”)</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Better prompt:</strong></p>
<ul>
<li><p>“Please write me an email to my boss explaining I’m sick and can’t come to work.”</p>
<ul>
<li>(GenAI: Produces a clear, professional message ready to send!)</li>
</ul>
</li>
</ul>
</li>
</ul>
<p><strong>Takeaway:</strong> The results improve dramatically when you put in a little thought, just like how your experience at a restaurant improves when you’re clear with your order.</p>
<h2 id="heading-more-real-world-examples-across-fields">More Real-World Examples Across Fields</h2>
<p>Let’s broaden this with practical examples:</p>
<ul>
<li><p><strong>Marketing:</strong></p>
<ul>
<li><em>“Create three catchy headlines for a spring sale on winter clothing.”</em></li>
</ul>
</li>
<li><p><strong>Education:</strong></p>
<ul>
<li><em>“Explain the theory of relativity in simple terms for a 10-year-old.”</em></li>
</ul>
</li>
<li><p><strong>Healthcare:</strong></p>
<ul>
<li><em>“Summarize this patient’s medical history in three sentences.”</em></li>
</ul>
</li>
<li><p><strong>Design:</strong></p>
<ul>
<li><em>“Generate a logo concept for a modern vegan café, using green and white.”</em></li>
</ul>
</li>
<li><p><strong>Programming:</strong></p>
<ul>
<li><em>“Write a Python script that sorts a list of names alphabetically.”</em></li>
</ul>
</li>
</ul>
<p>In each case, the <em>clarity, context, and specificity</em> of your prompt drive the quality of GenAI's response.</p>
<h2 id="heading-behind-the-scenes-how-does-genai-actually-work">Behind the Scenes: How Does GenAI Actually Work?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1765974744472/e06106d9-5453-4a2c-a6da-aef580b0e4bd.png" alt class="image--center mx-auto" /></p>
<p>While Generative AI can feel magical—responding instantly and intelligently to your prompts—what’s happening under the hood is a sophisticated blend of computer science, mathematics, and large-scale data engineering. Below, I’ll break down each key stage and component, using clear explanations and practical analogies to help you visualize what’s really going on during a typical GenAI interaction.</p>
<h3 id="heading-1-pre-training-building-the-ai-brain">1. Pre-Training: Building the AI Brain</h3>
<p><strong>Massive Data Ingestion:</strong></p>
<ul>
<li><p>GenAI models like GPT-4 are “pre-trained” on enormous datasets scraped from the internet, books, articles, code, images, and more.</p>
</li>
<li><p>Data is tokenized (broken into small chunks), and each token is mapped to a numerical value that the model can manipulate.</p>
</li>
</ul>
<p><strong>Self-Supervised Learning:</strong></p>
<ul>
<li><p>During training, the model is shown billions of sequences (sentences, code, etc.) and learns to predict the next token in a sequence.</p>
</li>
<li><p>No human labels are required here—the model learns by trying to guess missing words or next words, seeing where it went wrong, and updating itself.</p>
</li>
</ul>
<p><strong>Emergence of Patterns:</strong></p>
<ul>
<li><p>Over time, the neural network captures intricate relationships in language: grammar rules, contextual clues, cultural references, and even some logic.</p>
</li>
<li><p>Weights and biases in the model are gradually tuned so that it gets better and better at predicting what comes next in any given context.</p>
</li>
</ul>
<h3 id="heading-2-prompt-processing-interpreting-your-input">2. Prompt Processing: Interpreting Your Input</h3>
<p><strong>Tokenization:</strong></p>
<ul>
<li><p>When you type a prompt, it’s instantly split into tokens (words, subwords, or even characters, depending on the model’s design).</p>
</li>
<li><p>Example: “Write a summary of Hamlet” → [‘Write’, ‘a’, ‘summary’, ‘of’, ‘Ham’, ‘let’]</p>
</li>
</ul>
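<p>To make this concrete, here is a toy greedy longest-match tokenizer. It is a heavy simplification of real subword tokenizers such as BPE, and the vocabulary below is invented purely for illustration:</p>

```python
def toy_tokenize(text, vocab):
    # Greedy longest-match: at each position, take the longest vocabulary
    # entry that fits; unknown characters become single-character tokens.
    tokens, i = [], 0
    while i != len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])
            i += 1
    return tokens

# Tiny invented vocabulary; real models use tens of thousands of entries.
vocab = {"Write", " a", " summary", " of", " Ham", "let"}
print(toy_tokenize("Write a summary of Hamlet", vocab))
# ['Write', ' a', ' summary', ' of', ' Ham', 'let']
```

<p>Real tokenizers learn their vocabulary from data, which is why common words usually map to a single token while rarer words like “Hamlet” split into several pieces.</p>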
<p><strong>Embeddings:</strong></p>
<ul>
<li><p>Each token is converted into a high-dimensional vector (embedding) that captures its semantic meaning.</p>
</li>
<li><p>These vectors provide the foundation for all subsequent computations.</p>
</li>
</ul>
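<p>The intuition that embeddings “capture meaning” can be sketched with cosine similarity: related words end up pointing in similar directions. The 3-dimensional vectors below are made up for illustration (real embeddings have hundreds or thousands of dimensions):</p>

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy embeddings, purely illustrative.
emb = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.88, 0.82, 0.15],
    "banana": [0.10, 0.05, 0.90],
}
print(cosine(emb["king"], emb["queen"]))   # close to 1.0
print(cosine(emb["king"], emb["banana"]))  # much smaller
```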
<p><strong>Neural Network Layers:</strong></p>
<ul>
<li><p>The sequence of embeddings flows through dozens or even hundreds of transformer layers.</p>
</li>
<li><p>Each layer adjusts its understanding by considering the position and context of each token within the entire prompt (“attention mechanisms”).</p>
</li>
<li><p>These attention maps allow the AI to weigh meaning and relationships, just as a human would focus on key words in a sentence for overall meaning.</p>
</li>
</ul>
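<p>At its core, an attention mechanism scores how relevant each token is to the one being processed, then normalizes those scores into weights. Here is a minimal single-query sketch with toy vectors (not a full multi-head implementation):</p>

```python
import math

def attention_weights(query, keys):
    # Scaled dot-product attention for one query: score each key,
    # then softmax the scores into weights that sum to 1.
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

# The query "attends" most to the key pointing in the same direction.
weights = attention_weights([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(weights)
```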
<h3 id="heading-3-token-prediction-generating-the-output">3. Token Prediction: Generating the Output</h3>
<p><strong>Stepwise Generation:</strong></p>
<ul>
<li><p>The model doesn’t generate your answer all at once. Instead, it predicts one token at a time.</p>
</li>
<li><p>After each token is predicted, the new token is appended to the prompt, and the model recalculates the probabilities for what should come next.</p>
</li>
</ul>
<p><strong>Probability Distribution:</strong></p>
<ul>
<li><p>For every step, the AI constructs a probability distribution over its entire vocabulary, assigning a likelihood to each possible next token.</p>
</li>
<li><p>The final output is selected via “sampling”—potentially influenced by parameters like temperature.</p>
</li>
</ul>
<p><strong>Temperature (Creativity Control):</strong></p>
<ul>
<li><p>Lower temperature = safer, more focused, and more predictable answers.</p>
</li>
<li><p>Higher temperature = more creative and surprising outputs, but sometimes less precise.</p>
</li>
</ul>
<p><strong>Example:</strong><br />If you prompt, “Once upon a time in a faraway kingdom,” the model scores all likely next words: ‘there’, ‘a’, ‘lived’, etc.—and picks the most probable, or sometimes a slightly less probable one if sampling is used.</p>
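<p>Here is a minimal sketch of how temperature reshapes the next-token distribution before sampling. The scores below are invented toy values, not real model logits:</p>

```python
import math
import random

def next_token_probs(logits, temperature=1.0):
    # Divide raw scores by the temperature, then softmax into probabilities.
    # Low temperature sharpens the distribution; high temperature flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(probs, rng):
    # Draw one token according to its probability.
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Toy scores for words that might follow "Once upon a time in a faraway kingdom,"
logits = {"there": 4.0, "a": 2.5, "lived": 2.0}
cold = next_token_probs(logits, temperature=0.2)  # almost always "there"
hot = next_token_probs(logits, temperature=2.0)   # other words become likely
print(sample(cold, random.Random(0)))
```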
<h3 id="heading-4-advanced-features-context-window-memory-and-roles">4. Advanced Features: Context Window, Memory, and Roles</h3>
<p><strong>Context Window:</strong></p>
<ul>
<li><p>Each model has a “context window” (e.g., 8,000 or even 128,000 tokens), which is the maximum amount of input and output it can consider at once.</p>
</li>
<li><p>Anything outside this window is effectively forgotten, so in long conversations the most recent (or explicitly restated) information takes priority.</p>
</li>
</ul>
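<p>Applications work around this limit by trimming older input. Below is a toy sketch that keeps only the most recent messages fitting a token budget; the word-count “tokenizer” is a crude stand-in for a real one:</p>

```python
def fit_context(messages, max_tokens, count_tokens):
    # Walk backwards from the newest message, keeping messages until
    # the token budget is exhausted; older messages are dropped.
    kept, used = [], 0
    for msg in reversed(messages):
        n = count_tokens(msg)
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return list(reversed(kept))

history = ["first long message here", "short reply", "latest question"]
# Crude stand-in: count whitespace-separated words as "tokens".
print(fit_context(history, max_tokens=4, count_tokens=lambda m: len(m.split())))
# ['short reply', 'latest question']
```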
<p><strong>Conversation Memory:</strong></p>
<ul>
<li><p>Some advanced models can hold onto information across turns in a conversation using session memory.</p>
</li>
<li><p>This allows for coherent responses even when context is spread over several exchanges.</p>
</li>
</ul>
<p><strong>System and Role Prompts:</strong></p>
<ul>
<li><p>You can set context or roles (e.g., “Act as a financial analyst”) to steer the AI’s tone, expertise, or style throughout the conversation.</p>
</li>
<li><p>These system prompts act as invisible guide rails, subtly influencing the AI’s output.</p>
</li>
</ul>
<h3 id="heading-5-end-to-end-example">5. End-to-End Example</h3>
<p>Let’s tie it all together with a simple walk-through:</p>
<ol>
<li><p><strong>Prompt Submitted:</strong> “Explain quantum computing to a 10-year-old.”</p>
</li>
<li><p><strong>Tokenization:</strong> Prompt broken into manageable tokens.</p>
</li>
<li><p><strong>Embedding &amp; Encoding:</strong> Each token mapped into a vector; processed by the transformer network.</p>
</li>
<li><p><strong>Attention &amp; Context:</strong> Model pays special attention to key concepts (“quantum computing”, “10-year-old”).</p>
</li>
<li><p><strong>Stepwise Prediction:</strong> AI predicts, generates, and appends each new token until the sentence is finished.</p>
</li>
<li><p><strong>Output Displayed:</strong> The generated text appears on your screen—reading like an expert’s explanation, but tailored to a child’s understanding.</p>
</li>
</ol>
<h3 id="heading-technical-example">Technical Example</h3>
<p>Suppose you prompt:<br /><em>“Write a SQL query to select the top 5 products by sales from orders table.”</em></p>
<p>Behind the curtain:</p>
<ul>
<li><p>The AI parses the request, recognizes technical keywords (SQL, select, top, sales, orders table), and assembles a likely correct statement from millions of similar examples in its training data.</p>
</li>
<li><p>It might output something like:</p>
<pre><code class="lang-sql">  <span class="hljs-keyword">SELECT</span> product_id, <span class="hljs-keyword">SUM</span>(sales) <span class="hljs-keyword">AS</span> total_sales
  <span class="hljs-keyword">FROM</span> orders
  <span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> product_id
  <span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span> total_sales <span class="hljs-keyword">DESC</span>
  <span class="hljs-keyword">LIMIT</span> <span class="hljs-number">5</span>;
</code></pre>
</li>
<li><p>You didn’t have to code from scratch—the prompt did the heavy lifting!</p>
</li>
</ul>
<h2 id="heading-tips-for-writing-great-prompts-and-avoiding-frustration">Tips for Writing Great Prompts (and Avoiding Frustration)</h2>
<ul>
<li><p><strong>Be Clear and Direct:</strong> State exactly what you want, with as much relevant detail as necessary.</p>
</li>
<li><p><strong>Define Role or Tone:</strong> If you want a friendly email, say so. If you want a technical explanation, specify the audience.</p>
</li>
<li><p><strong>Use Examples:</strong> If appropriate, share a template or sample within your prompt.</p>
</li>
<li><p><strong>Iterate:</strong> If you don’t get what you want the first time, tweak your prompt and try again—like clarifying your order at a café.</p>
</li>
</ul>
<h2 id="heading-why-prompt-engineering-is-the-new-literacy">Why Prompt Engineering is the “New Literacy”</h2>
<p>In many ways, <em>prompt engineering</em> is becoming a vital skill—like reading or basic computer literacy. The more you practice, the more powerful and precise your interactions with AI will become.</p>
<ul>
<li><p>Students can leverage AI for studying and homework help.</p>
</li>
<li><p>Professionals speed up research, drafting, analytics, and more.</p>
</li>
<li><p>Creatives expand their toolkit, generating fresh ideas at the push of a button.</p>
</li>
</ul>
<p>Prompt engineering isn’t about “talking to robots”—it’s about shaping technology so it truly works for you.</p>
]]></content:encoded></item><item><title><![CDATA[Why Every Developer Should Track Their Daily Routine]]></title><description><![CDATA[Let’s be honest—being a developer isn’t just about writing code. It’s about solving problems, dealing with burnout, handling meetings, chasing deadlines, and finding time to learn, debug, and ship. But in the middle of this chaos, one simple habit ca...]]></description><link>https://deviloper.in/why-every-developer-should-track-their-daily-routine</link><guid isPermaLink="true">https://deviloper.in/why-every-developer-should-track-their-daily-routine</guid><category><![CDATA[daily task tracker]]></category><category><![CDATA[journal]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Sun, 01 Jun 2025 17:17:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748798070818/77a54ca3-8019-40a0-a6ed-3bdf0ea71154.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s be honest—being a developer isn’t just about writing code. It’s about solving problems, dealing with burnout, handling meetings, chasing deadlines, and finding time to <strong>learn, debug, and ship</strong>. But in the middle of this chaos, one simple habit can drastically improve your productivity, motivation, and mental clarity:</p>
<p><strong>Tracking your daily routine.</strong></p>
<h3 id="heading-why-track-as-a-developer"><strong>🧠 Why Track as a Developer?</strong></h3>
<p>Here’s what happens when you start logging your day as a coder:</p>
<ul>
<li><p>✅ <strong>Clarity</strong>: You get a clear view of how you spend your hours—coding, meetings, learning, procrastinating.</p>
</li>
<li><p>🚀 <strong>Progress</strong>: You see real growth, even in small wins like solving a bug or learning a new tool.</p>
</li>
<li><p>🧘 <strong>Balance</strong>: You catch when you’re burning out or skipping breaks.</p>
</li>
<li><p>🛠️ <strong>Reflection</strong>: You spot what works and what doesn’t, like that late-night debugging session that ruined your sleep.</p>
</li>
</ul>
<p>But let’s face it, digital tools often feel too rigid or too distracting.</p>
<p>That’s where the <strong>“404: Motivation Not Found” Journal</strong> comes in.</p>
<h2 id="heading-whats-the-404-journal"><strong>📝 What’s the 404 Journal?</strong></h2>
<p><strong>“404: Motivation Not Found”</strong> isn’t just a clever name. It’s a physical (or printable) developer-friendly journal crafted to keep your coding life in check. It’s made <em>by a developer, for developers.</em></p>
<h3 id="heading-features-include"><strong>🔍 Features Include:</strong></h3>
<ul>
<li><p>✅ <strong>Daily Tracker</strong> – Log your tasks, code time, mood, bugs fixed, and breakthroughs.</p>
</li>
<li><p>📆 <strong>Weekly Recap</strong> – Review what worked and what didn’t.</p>
</li>
<li><p>📈 <strong>Productivity Graphs</strong> – Visual sections to plot your effort vs. results.</p>
</li>
<li><p>☕ <strong>Coffee/Water/Mental Break Tracker</strong> – Because we know caffeine powers Git commits.</p>
</li>
<li><p>💬 <strong>Coding Memes + Quotes</strong> – A sprinkle of humour to get you through that 500th error.</p>
</li>
<li><p>🧠 <strong>Brain Dump Area</strong> – Jot ideas, bugs, or random thoughts so you don’t forget them mid-debug.</p>
</li>
<li><p>💻 <strong>Cheat Sheets</strong> – Quick tips for Git, VS Code, Linux, Python, Regex, and more.</p>
</li>
</ul>
<h2 id="heading-whos-it-for"><strong>👨‍💻 Who’s It For?</strong></h2>
<p>Whether you’re:</p>
<ul>
<li><p>A <strong>student</strong> learning to code,</p>
</li>
<li><p>A <strong>junior dev</strong> juggling bugs and tutorials,</p>
</li>
<li><p>A <strong>senior engineer</strong> who wants more structure,</p>
</li>
<li><p>Or a <strong>freelancer</strong> managing your time and projects,</p>
</li>
</ul>
<p>The <strong>404 Journal</strong> is your non-digital accountability partner.</p>
<h2 id="heading-where-to-get-it"><strong>📦 Where to Get It</strong></h2>
<p>You can grab your copy of the <strong>“404: Motivation Not Found” Journal</strong> here:</p>
<p>🔗 <a class="post-section-overview" href="#"><strong>Buy Now on Pothi.com</strong></a> (<a target="_blank" href="https://store.pothi.com/book/rahul-dubey-404-motivation-no-found/">404-motivation-no-found</a>)</p>
<p>🔗 <a class="post-section-overview" href="#"><strong>Buy Now on Amazon.com</strong></a> (<a target="_blank" href="https://www.amazon.com/dp/B0FB36HT6D">404-motivation-no-found</a>)</p>
<p>Available in:</p>
<ul>
<li><strong>Paperback (6×9)</strong> – Perfect to carry around</li>
</ul>
<hr />
<h2 id="heading-final-thoughts"><strong>🎯 Final Thoughts</strong></h2>
<p>You don’t need another app.</p>
<p>You need <strong>a system that sticks</strong>.</p>
<p>Writing things down—on real paper—keeps you focused, grounded, and aware of how awesome (or unmotivated) your day was. So the next time you hit a mental block or feel stuck in a loop of debugging and burnout…</p>
<p>Grab your <strong>404 journal</strong>, flip to a fresh page, and take back control of your dev routine—one log at a time.</p>
<hr />
<p><strong>Stay productive. Stay sane. Keep coding.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Advanced JSON Diff Checker in Python: An In-Depth Guide]]></title><description><![CDATA[Introduction:
JSON (JavaScript Object Notation) is a widely used data interchange format, especially in web development and configuration management. When dealing with JSON data, it's common to encounter scenarios where you need to compare two JSON o...]]></description><link>https://deviloper.in/advanced-json-diff-checker-in-python-an-in-depth-guide</link><guid isPermaLink="true">https://deviloper.in/advanced-json-diff-checker-in-python-an-in-depth-guide</guid><category><![CDATA[json]]></category><category><![CDATA[Python]]></category><category><![CDATA[difference checker]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Wed, 04 Oct 2023 01:33:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696383085314/2b5173ca-22bc-464d-bd98-1b7cc892a068.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Introduction</strong>:</p>
<p>JSON (JavaScript Object Notation) is a widely used data interchange format, especially in web development and configuration management. When dealing with JSON data, it's common to encounter scenarios where you need to compare two JSON objects to find differences. Whether you're tracking changes in configurations, validating data updates, or debugging code, having a reliable JSON diff checker at your disposal can be a lifesaver.</p>
<p>In this comprehensive guide, we'll dive into the world of JSON diff checking and explore how to build an advanced JSON diff checker in Python. We'll leverage the power of the <code>deepdiff</code> library to perform precise comparisons and the <code>termcolor</code> library to provide a visually appealing and user-friendly output.</p>
<p><strong>Prerequisites</strong>:</p>
<p>Before we get started, there are a few prerequisites you should have in place:</p>
<ul>
<li><p>Basic knowledge of Python programming.</p>
</li>
<li><p>Python installed on your system.</p>
</li>
<li><p>Familiarity with installing Python libraries using <code>pip</code>.</p>
</li>
<li><p>Access to a code editor or integrated development environment (IDE).</p>
</li>
</ul>
<p><strong>Section 2: Choosing the Right Tools</strong></p>
<p>In our journey to build an advanced JSON diff checker in Python, the first step is selecting the right tools for the job. Fortunately, the Python ecosystem offers two essential libraries that will make our task significantly easier: <code>deepdiff</code> and <code>termcolor</code>.</p>
<p><strong>deepdiff: Precise JSON Comparison</strong></p>
<p>The <code>deepdiff</code> library is a powerful tool for comparing complex data structures in Python, including JSON objects. It provides a comprehensive set of functionalities to identify differences between two data structures, making it perfect for our JSON diff checker.</p>
<p>To install <code>deepdiff</code>, open your terminal or command prompt and run the following command:</p>
<pre><code class="lang-bash">pip install deepdiff
</code></pre>
<p>We'll explore how to use <code>deepdiff</code> in the upcoming sections to perform precise JSON comparisons.</p>
<p><strong>termcolor: Enhancing Output</strong></p>
<p>While the raw differences between JSON objects are valuable, presenting these differences in a human-readable and visually appealing format enhances the user experience. This is where the <code>termcolor</code> library comes into play.</p>
<p><code>termcolor</code> allows us to add colors and styling to our console output. By color-coding the differences between JSON objects, we can quickly identify added, removed, and modified keys and values, making it easier to understand and act upon the differences.</p>
<p>To install <code>termcolor</code>, use the following command:</p>
<pre><code class="lang-bash">pip install termcolor
</code></pre>
<p><strong>Section 3: Preparing Sample JSON Data</strong></p>
<p>To effectively test and demonstrate our advanced JSON diff checker, we need sample JSON data with various complexities. In this section, we'll create two JSON objects with nested structures, representing before-and-after states of data.</p>
<p>Let's start with our first JSON object, which we'll call <code>json_obj1</code>:</p>
<pre><code class="lang-python">json_obj1 = {
    <span class="hljs-string">"name"</span>: <span class="hljs-string">"John"</span>,
    <span class="hljs-string">"age"</span>: <span class="hljs-number">30</span>,
    <span class="hljs-string">"city"</span>: <span class="hljs-string">"New York"</span>,
    <span class="hljs-string">"hobbies"</span>: [<span class="hljs-string">"reading"</span>, <span class="hljs-string">"hiking"</span>]
}
</code></pre>
<p>In <code>json_obj1</code>, we have a person's details, including their name, age, city, and a list of hobbies. This JSON structure is straightforward and serves as our initial data state.</p>
<p>Now, let's create the second JSON object, <code>json_obj2</code>, which represents a modified version of the data:</p>
<pre><code class="lang-python">json_obj2 = {
    <span class="hljs-string">"name"</span>: <span class="hljs-string">"Jane"</span>,
    <span class="hljs-string">"city"</span>: <span class="hljs-string">"San Francisco"</span>,
    <span class="hljs-string">"country"</span>: <span class="hljs-string">"USA"</span>,
    <span class="hljs-string">"hobbies"</span>: [<span class="hljs-string">"hiking"</span>, <span class="hljs-string">"painting"</span>]
}
</code></pre>
<p>In <code>json_obj2</code>, we see several changes compared to <code>json_obj1</code>: Jane’s name has replaced John’s, the city is different, the "age" key has been removed, a new "country" key has been added, and the hobbies have been updated.</p>
<p>These two JSON objects will serve as our testing data to demonstrate how our JSON diff checker detects differences. In the upcoming sections, we'll use <code>deepdiff</code> and <code>termcolor</code> to perform the comparison and present the results in a user-friendly format.</p>
<p>Feel free to use these sample JSON objects for testing or replace them with your own data as needed.</p>
<p><strong>Section 4: Comparing JSON Objects</strong></p>
<p>Now that we have our sample JSON data prepared (<code>json_obj1</code> and <code>json_obj2</code>), it's time to dive into the core of our advanced JSON diff checker: comparing these JSON objects and identifying their differences.</p>
<p>We'll leverage the <code>deepdiff</code> library, which provides powerful tools for comparing complex data structures, including JSON objects. Here's how we can perform the comparison:</p>
<p><strong>Step 1: Installing deepdiff (if not already installed)</strong></p>
<p>If you haven't installed the <code>deepdiff</code> library yet, you can do so by running:</p>
<pre><code class="lang-bash">pip install deepdiff
</code></pre>
<p><strong>Step 2: Importing deepdiff</strong></p>
<p>In your Python script, make sure to import <code>deepdiff</code>:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> deepdiff <span class="hljs-keyword">import</span> DeepDiff
</code></pre>
<p><strong>Step 3: Comparing JSON Objects</strong></p>
<p>To compare our JSON objects (<code>json_obj1</code> and <code>json_obj2</code>), we can use the following code:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Find differences between the two JSON objects</span>
diff = DeepDiff(json_obj1, json_obj2, ignore_order=<span class="hljs-literal">True</span>)
</code></pre>
<p>In this code:</p>
<ul>
<li><p><code>DeepDiff</code> is used to perform the comparison between <code>json_obj1</code> and <code>json_obj2</code>.</p>
</li>
<li><p><code>ignore_order=True</code> ensures that the order of items in lists or dictionaries won't affect the comparison result, making it more suitable for JSON comparisons.</p>
</li>
</ul>
<p><strong>Step 4: Categorizing Differences</strong></p>
<p>The <code>diff</code> variable now contains the differences between the two JSON objects. These differences are categorized into various types, such as "dictionary_item_added," "dictionary_item_removed," and "values_changed." We can use these categories to identify the nature of the differences.</p>
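<p>To see what these categories look like in practice, here is a toy top-level comparison that mimics the shape of <code>deepdiff</code>’s text view. It only handles flat keys; <code>deepdiff</code> itself recurses into nested structures and list items:</p>

```python
def shallow_diff(old, new):
    # Compare only top-level keys, reporting results in
    # deepdiff-style categories and "root[...]" paths.
    diff = {}
    added = set(new).difference(old)
    removed = set(old).difference(new)
    common = set(old).intersection(new)
    changed = {k for k in common if old[k] != new[k]}
    if added:
        diff["dictionary_item_added"] = sorted(f"root['{k}']" for k in added)
    if removed:
        diff["dictionary_item_removed"] = sorted(f"root['{k}']" for k in removed)
    if changed:
        diff["values_changed"] = {
            f"root['{k}']": {"old_value": old[k], "new_value": new[k]}
            for k in sorted(changed)
        }
    return diff

old = {"name": "John", "age": 30, "city": "New York"}
new = {"name": "Jane", "city": "San Francisco", "country": "USA"}
print(shallow_diff(old, new))
```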
<p><strong>Section 5: Colorizing the Output</strong></p>
<p>In the previous section, we successfully compared our JSON objects using the <code>deepdiff</code> library and categorized the differences. However, a plain textual representation of these differences can be challenging to read and understand, especially when dealing with complex JSON data.</p>
<p>To enhance the user experience and make our JSON diff checker more user-friendly, we can add color-coded formatting to the output. This will make it easier to identify added, removed, and modified keys and values at a glance.</p>
<p>For this purpose, we'll use the <code>termcolor</code> library, which allows us to add colors and styling to console text.</p>
<p><strong>Step 1: Installing termcolor (if not already installed)</strong></p>
<p>Ensure you have the <code>termcolor</code> library installed by running:</p>
<pre><code class="lang-bash">pip install termcolor
</code></pre>
<p><strong>Step 2: Importing termcolor</strong></p>
<p>In your Python script, import the <code>colored</code> function from the <code>termcolor</code> library:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> termcolor <span class="hljs-keyword">import</span> colored
</code></pre>
<p><strong>Step 3: Color-Coding the Output</strong></p>
<p>Now that we have <code>termcolor</code> imported, we can use the <code>colored</code> function to add colors to our JSON diff checker output. Here's an example of how to apply color coding to the differences:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Print the differences with color-coded output</span>
<span class="hljs-keyword">for</span> category, changes <span class="hljs-keyword">in</span> diff.items():
    <span class="hljs-keyword">if</span> category == <span class="hljs-string">'dictionary_item_added'</span>:
        print(colored(<span class="hljs-string">"Added Keys:"</span>, <span class="hljs-string">'green'</span>))
    <span class="hljs-keyword">elif</span> category == <span class="hljs-string">'dictionary_item_removed'</span>:
        print(colored(<span class="hljs-string">"Removed Keys:"</span>, <span class="hljs-string">'red'</span>))
    <span class="hljs-keyword">elif</span> category == <span class="hljs-string">'values_changed'</span>:
        print(colored(<span class="hljs-string">"Modified Values:"</span>, <span class="hljs-string">'yellow'</span>))

    <span class="hljs-keyword">for</span> change <span class="hljs-keyword">in</span> changes:
        <span class="hljs-keyword">if</span> isinstance(changes, dict):
            <span class="hljs-comment"># values_changed maps each path to its old and new values</span>
            print(<span class="hljs-string">f"<span class="hljs-subst">{change}</span>: <span class="hljs-subst">{changes[change]}</span>"</span>)
        <span class="hljs-keyword">else</span>:
            <span class="hljs-comment"># added/removed keys are reported as a collection of paths</span>
            print(change)
</code></pre>
<p>In this code:</p>
<ul>
<li><p>We iterate through the categories of differences (<code>dictionary_item_added</code>, <code>dictionary_item_removed</code>, and <code>values_changed</code>).</p>
</li>
<li><p>We use the <code>colored</code> function to apply colors to the category labels for added keys (green), removed keys (red), and modified values (yellow).</p>
</li>
</ul>
<p>By color-coding the output, we make it much easier to spot differences and understand the nature of the changes between JSON objects.</p>
<p><strong>Section 6: Full Code Example</strong></p>
<p>In this section, we'll provide a complete Python code example that demonstrates our advanced JSON diff checker in action. We'll incorporate everything we've discussed so far, including comparing JSON objects using <code>deepdiff</code> and enhancing the output with color-coded formatting using the <code>termcolor</code> library.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> deepdiff <span class="hljs-keyword">import</span> DeepDiff
<span class="hljs-keyword">from</span> termcolor <span class="hljs-keyword">import</span> colored

<span class="hljs-comment"># Sample JSON objects</span>
json_obj1 = {
    <span class="hljs-string">"name"</span>: <span class="hljs-string">"John"</span>,
    <span class="hljs-string">"age"</span>: <span class="hljs-number">30</span>,
    <span class="hljs-string">"city"</span>: <span class="hljs-string">"New York"</span>,
    <span class="hljs-string">"hobbies"</span>: [<span class="hljs-string">"reading"</span>, <span class="hljs-string">"hiking"</span>]
}

json_obj2 = {
    <span class="hljs-string">"name"</span>: <span class="hljs-string">"Jane"</span>,
    <span class="hljs-string">"city"</span>: <span class="hljs-string">"San Francisco"</span>,
    <span class="hljs-string">"country"</span>: <span class="hljs-string">"USA"</span>,
    <span class="hljs-string">"hobbies"</span>: [<span class="hljs-string">"hiking"</span>, <span class="hljs-string">"painting"</span>]
}

<span class="hljs-comment"># Compare JSON objects</span>
diff = DeepDiff(json_obj1, json_obj2, ignore_order=<span class="hljs-literal">True</span>)

<span class="hljs-comment"># Function to colorize the output</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">colorize</span>(<span class="hljs-params">text, color</span>):</span>
    <span class="hljs-keyword">return</span> colored(text, color, attrs=[<span class="hljs-string">"bold"</span>])

<span class="hljs-comment"># Print the differences with color-coded output</span>
<span class="hljs-keyword">for</span> category, changes <span class="hljs-keyword">in</span> diff.items():
    <span class="hljs-keyword">if</span> category == <span class="hljs-string">'dictionary_item_added'</span>:
        print(colorize(<span class="hljs-string">"Added Keys:"</span>, <span class="hljs-string">'green'</span>))
    <span class="hljs-keyword">elif</span> category == <span class="hljs-string">'dictionary_item_removed'</span>:
        print(colorize(<span class="hljs-string">"Removed Keys:"</span>, <span class="hljs-string">'red'</span>))
    <span class="hljs-keyword">elif</span> category == <span class="hljs-string">'values_changed'</span>:
        print(colorize(<span class="hljs-string">"Modified Values:"</span>, <span class="hljs-string">'yellow'</span>))

    <span class="hljs-comment"># values_changed maps paths to change details; the item_added/</span>
    <span class="hljs-comment"># item_removed categories are plain collections of paths</span>
    <span class="hljs-keyword">if</span> isinstance(changes, dict):
        <span class="hljs-keyword">for</span> change, detail <span class="hljs-keyword">in</span> changes.items():
            print(<span class="hljs-string">f"<span class="hljs-subst">{change}</span>: <span class="hljs-subst">{detail}</span>"</span>)
    <span class="hljs-keyword">else</span>:
        <span class="hljs-keyword">for</span> change <span class="hljs-keyword">in</span> changes:
            print(change)

<span class="hljs-comment"># Print the full JSON objects for reference</span>
print(colorize(<span class="hljs-string">"\nJSON Object 1:"</span>, <span class="hljs-string">'cyan'</span>))
print(json.dumps(json_obj1, indent=<span class="hljs-number">4</span>))

print(colorize(<span class="hljs-string">"\nJSON Object 2:"</span>, <span class="hljs-string">'cyan'</span>))
print(json.dumps(json_obj2, indent=<span class="hljs-number">4</span>))
</code></pre>
<p>In this complete code example:</p>
<ul>
<li><p>We define our sample JSON objects (<code>json_obj1</code> and <code>json_obj2</code>) that represent the data before and after changes.</p>
</li>
<li><p>We use <code>DeepDiff</code> to compare these JSON objects and store the differences in the <code>diff</code> variable.</p>
</li>
<li><p>We create a <code>colorize</code> function to apply color coding to our output using <code>termcolor</code>.</p>
</li>
<li><p>We iterate through the categories of differences (added keys, removed keys, and modified values) and print them with color-coded formatting.</p>
</li>
<li><p>Finally, we print the full JSON objects for reference.</p>
</li>
</ul>
<p>With this code, you have a fully functional JSON diff checker that not only identifies differences but also presents them in a visually appealing and user-friendly manner.</p>
<p><strong>Section 7: Testing the JSON Diff Checker</strong></p>
<p>Now that we have our advanced JSON diff checker code in place, it's time to put it to the test. We'll use our sample JSON objects, <code>json_obj1</code> and <code>json_obj2</code>, to see how our checker performs and how it helps us identify differences between them.</p>
<p><strong>Step 1: Run the Code</strong></p>
<p>Copy the entire code example provided in the previous section and run it in your Python environment. Make sure you have both the <code>deepdiff</code> and <code>termcolor</code> libraries installed.</p>
<p><strong>Step 2: Review the Output</strong></p>
<p>When you run the code, you'll see the JSON diff checker in action. It will categorize the differences between <code>json_obj1</code> and <code>json_obj2</code> into added keys, removed keys, and modified values. The output will be color-coded for easy identification:</p>
<ul>
<li><p>Added Keys (Green)</p>
</li>
<li><p>Removed Keys (Red)</p>
</li>
<li><p>Modified Values (Yellow)</p>
</li>
</ul>
<p>For example, if "name" has changed from "John" to "Jane," you'll see a yellow-highlighted entry indicating the modified value.</p>
<p><strong>Step 3: Interpret the Results</strong></p>
<p>Review the output carefully, and you'll notice how our JSON diff checker precisely identifies the differences between the two JSON objects. This can be particularly valuable when working with configuration files, tracking changes in data, or debugging code.</p>
<p>Feel free to modify <code>json_obj1</code> and <code>json_obj2</code> to create your own test scenarios. This way, you can see how the JSON diff checker handles various types of changes and updates.</p>
<p><strong>Section 8: Conclusion and Further Enhancements</strong></p>
<p>Congratulations! You've successfully created an advanced JSON diff checker in Python using the <code>deepdiff</code> and <code>termcolor</code> libraries. You've learned how to compare JSON objects with precision and present the results in a user-friendly and visually appealing format.</p>
<p>As you explore the possibilities and applications of your JSON diff checker, consider the following:</p>
<p><strong>Further Enhancements</strong>:</p>
<ol>
<li><p><strong>Interactive Input</strong>: Allow users to input JSON data interactively, making your tool more versatile.</p>
</li>
<li><p><strong>Customization</strong>: Provide options for users to customize the color-coding and output format according to their preferences.</p>
</li>
<li><p><strong>Batch Processing</strong>: Extend your tool to compare multiple JSON files or objects in batch mode.</p>
</li>
<li><p><strong>File Input/Output</strong>: Implement features to read JSON data from files and save the results to files for future reference.</p>
</li>
</ol>
<p><strong>Use Cases</strong>:</p>
<ol>
<li><p><strong>Configuration Management</strong>: Use your JSON diff checker to track changes in configuration files, ensuring that modifications are properly documented.</p>
</li>
<li><p><strong>Data Validation</strong>: Validate changes in data structures, such as database schemas or API responses, to ensure data integrity.</p>
</li>
<li><p><strong>Version Control</strong>: Incorporate your checker into version control workflows to detect and manage JSON changes in software projects.</p>
</li>
<li><p><strong>Debugging</strong>: Debugging complex JSON data can be easier when you can quickly spot differences between expected and actual data.</p>
</li>
</ol>
<p>By continuously improving your JSON diff checker and exploring various use cases, you'll find that it becomes a valuable tool in your arsenal, simplifying tasks that involve comparing and analyzing JSON data.</p>
<p>Thank you for joining us on this journey to create an advanced JSON diff checker in Python. We hope this guide has been informative and that your new tool serves you well in your coding and data management adventures.</p>
<p>Feel free to leave any feedback or questions in the comments, and happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[Enhancing Stock Technical Analysis with Machine Learning in Python]]></title><description><![CDATA[Introduction
Stock technical analysis is a critical component of the investment decision-making process. Traditionally, analysts rely on historical price charts and technical indicators to predict future stock price movements. However, with the advan...]]></description><link>https://deviloper.in/enhancing-stock-technical-analysis-with-machine-learning-in-python</link><guid isPermaLink="true">https://deviloper.in/enhancing-stock-technical-analysis-with-machine-learning-in-python</guid><category><![CDATA[ML]]></category><category><![CDATA[Python]]></category><category><![CDATA[stockmarket]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 14 Sep 2023 21:24:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Wb63zqJ5gnE/upload/ce6bf6e8c127cee236db43e46293856b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Introduction</p>
<p>Stock technical analysis is a critical component of the investment decision-making process. Traditionally, analysts rely on historical price charts and technical indicators to predict future stock price movements. However, with the advancement of technology, we can enhance this process by incorporating machine learning techniques into our analysis. In this article, we will explore how to use Python and machine learning to improve stock technical analysis.</p>
<p><strong>1. Data Collection</strong></p>
<p>The first step in this journey is to gather historical stock price data. Python offers several libraries and APIs for this purpose. One popular choice is the <code>yfinance</code> library, which allows you to fetch historical stock data from Yahoo Finance. For example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> yfinance <span class="hljs-keyword">as</span> yf
ticker = <span class="hljs-string">"AAPL"</span>
data = yf.download(ticker, start=<span class="hljs-string">"2020-01-01"</span>, end=<span class="hljs-string">"2021-01-01"</span>)
</code></pre>
<p><strong>2. Data Preprocessing</strong></p>
<p>Once you have your data, it's crucial to preprocess it. This involves cleaning and transforming the data into a suitable format for analysis. You may also need to handle missing values and outliers.</p>
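<p>A minimal preprocessing sketch with pandas might look like this (the <code>Close</code> column follows yfinance's defaults, and the 20% outlier threshold is an illustrative assumption):</p>

```python
import pandas as pd


def preprocess(data: pd.DataFrame) -> pd.DataFrame:
    """Fill gaps in the price series and drop rows with extreme daily moves."""
    # Forward-fill missing prices (e.g. gaps in the feed), then drop leftovers
    data = data.ffill().dropna()
    # Treat absolute daily returns above 20% as outliers (illustrative cutoff)
    returns = data["Close"].pct_change().abs().fillna(0)
    return data[returns < 0.2]
```
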
<p><strong>3. Feature Engineering</strong></p>
<p>In stock analysis, technical indicators play a significant role. These indicators can be calculated from historical price data and provide valuable insights. Common technical indicators include moving averages (e.g., Simple Moving Average or SMA), Relative Strength Index (RSI), Moving Average Convergence Divergence (MACD), and more.</p>
<pre><code class="lang-python">data[<span class="hljs-string">'SMA'</span>] = data[<span class="hljs-string">'Close'</span>].rolling(window=<span class="hljs-number">20</span>).mean()
data[<span class="hljs-string">'RSI'</span>] = compute_rsi(data[<span class="hljs-string">'Close'</span>])
data[<span class="hljs-string">'MACD'</span>] = compute_macd(data[<span class="hljs-string">'Close'</span>])
</code></pre>
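<p>Note that <code>compute_rsi</code> and <code>compute_macd</code> are not built-ins; they stand for helper functions you would write yourself. One possible pandas-based sketch (using simple rolling means for RSI rather than Wilder's smoothing):</p>

```python
import pandas as pd


def compute_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI from the ratio of average gains to average losses."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gain / loss)


def compute_macd(close: pd.Series, fast: int = 12, slow: int = 26) -> pd.Series:
    """MACD line: fast EMA minus slow EMA of the closing price."""
    fast_ema = close.ewm(span=fast, adjust=False).mean()
    slow_ema = close.ewm(span=slow, adjust=False).mean()
    return fast_ema - slow_ema
```
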
<p><strong>4. Model Selection</strong></p>
<p>For stock price prediction, machine learning models come into play. While there are various models to choose from, let's consider a straightforward example using a Random Forest Regressor from the <code>scikit-learn</code> library.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.ensemble <span class="hljs-keyword">import</span> RandomForestRegressor
model = RandomForestRegressor(n_estimators=<span class="hljs-number">100</span>, random_state=<span class="hljs-number">42</span>)
</code></pre>
<p><strong>5. Data Splitting</strong></p>
<p>To evaluate the model's performance, you need to split your data into training and testing sets. The training set is used to train the model, while the testing set assesses its predictive capabilities.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.model_selection <span class="hljs-keyword">import</span> train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=<span class="hljs-number">0.2</span>, random_state=<span class="hljs-number">42</span>)
</code></pre>
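<p>The <code>X</code> and <code>y</code> above are assumed to come from the engineered features; a common setup is to predict the next day's close from today's indicators. A sketch with synthetic stand-in data:</p>

```python
import numpy as np
import pandas as pd

# Stand-in for the downloaded data; in the article `data` comes from yfinance
rng = np.random.default_rng(0)
data = pd.DataFrame({"Close": 100 + rng.standard_normal(60).cumsum()})
data["SMA"] = data["Close"].rolling(window=20).mean()

# Target: the next day's closing price
data["Target"] = data["Close"].shift(-1)
clean = data.dropna()  # rolling windows and the shift introduce NaNs

X = clean[["SMA"]]  # RSI, MACD, etc. would be added the same way
y = clean["Target"]
```

<p>One caveat: a shuffled <code>train_test_split</code> leaks future information into training on time-series data, so a chronological split is usually safer for real evaluations.</p>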
<p><strong>6. Model Training</strong></p>
<p>With your data prepared and split, you can now train your machine learning model. This involves feeding historical features (e.g., technical indicators) to the model to predict future stock prices.</p>
<pre><code class="lang-python">model.fit(X_train, y_train)
</code></pre>
<p><strong>7. Model Evaluation</strong></p>
<p>After training, it's crucial to evaluate the model's performance. Common evaluation metrics for regression tasks include Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and R-squared (R2).</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> sklearn.metrics <span class="hljs-keyword">import</span> mean_absolute_error
y_pred = model.predict(X_test)
mae = mean_absolute_error(y_test, y_pred)
print(<span class="hljs-string">f"Mean Absolute Error: <span class="hljs-subst">{mae}</span>"</span>)
</code></pre>
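<p>The other metrics mentioned above can be computed the same way; here with toy values standing in for the real predictions:</p>

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Toy values for illustration; in practice reuse y_test and y_pred from above
y_test = [100.0, 102.0, 101.0]
y_pred = [99.0, 103.0, 100.5]

rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f"RMSE: {rmse:.3f}, R2: {r2:.3f}")
```
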
<p>Conclusion</p>
<p>Incorporating machine learning into stock technical analysis using Python can enhance your ability to predict price movements and make informed investment decisions. However, it's essential to remember that stock markets are influenced by various factors, and no model can guarantee accurate predictions. Therefore, always use these models as tools to aid your analysis rather than relying solely on automated decisions. Additionally, continually update your models and adapt to changing market conditions to maximize their effectiveness.</p>
]]></content:encoded></item><item><title><![CDATA[Navigating the Challenges and Solutions of GenAI: A Comprehensive Guide]]></title><description><![CDATA[Introduction
Artificial General Intelligence (AGI), often referred to as GenAI, represents a technological frontier that promises transformative changes in our society. However, with great power comes great responsibility. In this article, we explore...]]></description><link>https://deviloper.in/navigating-the-challenges-and-solutions-of-genai-a-comprehensive-guide</link><guid isPermaLink="true">https://deviloper.in/navigating-the-challenges-and-solutions-of-genai-a-comprehensive-guide</guid><category><![CDATA[genai]]></category><category><![CDATA[#cybersecurity]]></category><category><![CDATA[Ethical Concerns]]></category><category><![CDATA[risk assesment]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 14 Sep 2023 21:11:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/jIBMSMs4_kA/upload/9d2ce20e0517b20941df8c2a4e5ddac1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Introduction</p>
<p>Artificial General Intelligence (AGI), often referred to as GenAI, represents a technological frontier that promises transformative changes in our society. However, with great power comes great responsibility. In this article, we explore the significant disadvantages associated with GenAI and the strategies to overcome these challenges, ensuring that we harness its potential while mitigating its drawbacks.</p>
<p><strong>Disadvantages of GenAI</strong></p>
<ol>
<li><p><em>Ethical Concerns</em></p>
<p>AGI's capacity for autonomous decision-making raises ethical dilemmas. For example, in autonomous vehicles, if an AGI system must choose between two potential accident scenarios, who bears responsibility for its decision? To address this, ethical frameworks and regulations must be established, accompanied by oversight bodies to ensure accountability [1].</p>
</li>
<li><p><em>Job Displacement</em></p>
<p>The rise of AGI could lead to massive job displacement as it outperforms humans in various industries. For instance, in manufacturing, AGI-powered robots can perform repetitive tasks more efficiently. Reskilling and education programs, such as the one initiated by the European Union [2], can help workers transition to new roles and protect the workforce.</p>
</li>
<li><p><em>Security Risks</em></p>
<p>AGI poses significant security risks if misused. Robust cybersecurity measures are essential. For example, AI-powered threat detection systems can help protect AGI systems from hacking attempts [3]. Additionally, authentication and access controls, like those used in blockchain technology, can safeguard AGI from unauthorized access.</p>
</li>
<li><p><em>Lack of Accountability</em></p>
<p>As AGI becomes more autonomous, attributing responsibility becomes complex. Systems for tracing and attributing AGI actions, like blockchain-based audit trails, should be put in place, alongside clear lines of responsibility [4].</p>
</li>
<li><p><em>Bias and Fairness</em></p>
<p>AGI systems can inherit biases from their training data, perpetuating societal biases. For example, facial recognition software trained on biased data may misidentify certain ethnic groups. Ongoing auditing and diversifying AI development teams are crucial to mitigate bias [5].</p>
</li>
<li><p><em>Dependency and Control</em></p>
<p>Striking the right balance between AGI and human control is essential. AGI should complement human capabilities, not replace them. For example, in healthcare, AGI can assist doctors in diagnosis but should not replace their expertise. Human-AI collaboration in decision-making processes can help achieve this balance [6].</p>
</li>
<li><p><em>Privacy Concerns</em></p>
<p>AGI's data analysis capabilities can threaten individual privacy. Strengthened data protection regulations, like the European Union's GDPR, and practices such as federated learning can protect personal data [7]. Federated learning allows AI models to be trained locally on users' devices without sharing raw data.</p>
</li>
<li><p><em>Economic Inequality</em></p>
<p>The development and deployment of AGI may exacerbate economic inequality. Policies to redistribute AGI benefits, such as a progressive tax system, and considering safety nets like universal basic income can address this issue [8].</p>
</li>
<li><p><em>Unintended Consequences</em></p>
<p>The complexity of AGI systems makes predicting their behavior challenging. For example, in autonomous finance, AGI trading algorithms can lead to market volatility. Research and development should focus on understanding and mitigating potential risks and consequences [9].</p>
</li>
</ol>
<p><strong>Overcoming the Disadvantages</strong></p>
<ol>
<li><p><strong>Ethical Frameworks and Regulations:</strong> Establish clear ethical guidelines and regulations for AGI development and deployment, backed by oversight bodies [1].</p>
</li>
<li><p><strong>Job Transition and Reskilling:</strong> Invest in education and training programs to help individuals transition to new job roles and protect the workforce [2].</p>
</li>
<li><p><strong>Cybersecurity Measures:</strong> Enhance cybersecurity to protect AGI systems from potential threats and unauthorized access [3].</p>
</li>
<li><p><strong>Accountability Mechanisms:</strong> Develop systems for tracing and attributing AGI actions, ensuring clear lines of responsibility [4].</p>
</li>
<li><p><strong>Bias Mitigation:</strong> Continuously audit and improve AGI algorithms to reduce bias and promote diversity in AI development teams [5].</p>
</li>
<li><p><strong>Human-AI Collaboration:</strong> Encourage collaboration between humans and AI in decision-making processes [6].</p>
</li>
<li><p><strong>Data Privacy Protections:</strong> Strengthen data protection regulations and implement data anonymization and encryption practices [7].</p>
</li>
<li><p><strong>Economic Policies:</strong> Implement policies to redistribute AGI benefits and consider safety nets like universal basic income [8].</p>
</li>
<li><p><strong>Risk Assessment and Mitigation:</strong> Invest in research to understand and mitigate the potential risks and unintended consequences of AGI [9].</p>
</li>
</ol>
<p>Conclusion</p>
<p>GenAI holds the promise of revolutionizing our world, but it comes with its share of challenges. By proactively addressing ethical concerns, job displacement, security risks, and other issues, we can maximize the benefits of AGI while minimizing its drawbacks. The key lies in a collaborative effort involving governments, industries, researchers, and society to ensure a responsible and sustainable future with GenAI.</p>
<p>References:</p>
<p>[1] "Ethics Guidelines for Trustworthy AI." European Commission, April 2019.</p>
<p>[2] "Skills for Industry: The European Commission's Blueprint for Sectoral Cooperation on Skills." European Commission, December 2020.</p>
<p>[3] Doshi, Amisha et al. "Adversarial Attacks on Machine Learning Systems for Remote Sensing: Challenges and Future Directions." arXiv preprint arXiv:2012.11592, 2020.</p>
<p>[4] Tapscott, Don, and Alex Tapscott. "Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World." Penguin, 2016.</p>
<p>[5] Obermeyer, Ziad et al. "Dissecting racial bias in an algorithm used to manage the health of populations." Science, 2019.</p>
<p>[6] Topol, Eric J. "High-Performance Medicine: The Convergence of Human and Artificial Intelligence." Nature Medicine, 2019.</p>
<p>[7] "General Data Protection Regulation (GDPR)." European Union, May 2018.</p>
<p>[8] Bessen, James E. "AI and Jobs: The Role of Demand." NBER Working Paper No. 24235, 2018.</p>
<p>[9] Verma, Hitesh et al. "Algorithmic Trading: A Review and Evaluation of Recent Strategies." AI &amp; Society, 2020.</p>
<p>Reference Links:</p>
<ol>
<li><a target="_blank" href="https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai">Ethics Guidelines for Trustworthy AI - European Commission</a></li>
<li><a target="_blank" href="https://ec.europa.eu/social/main.jsp?catId=89&amp;langId=en&amp;pubId=8411">Skills for Industry: The European Commission's Blueprint for Sectoral Cooperation on Skills - European Commission</a></li>
<li><a target="_blank" href="https://arxiv.org/abs/2012.11592">Adversarial Attacks on Machine Learning Systems for Remote Sensing - arXiv</a></li>
<li><a target="_blank" href="https://www.penguin.co.uk/books/280678/blockchain-revolution/">Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World - Penguin</a></li>
<li><a target="_blank" href="https://science.sciencemag.org/content/366/6464/447">Dissecting racial bias in an algorithm used to manage the health of populations - Science</a></li>
<li><a target="_blank" href="https://www.nature.com/articles/s41591-018-0300-7">High-Performance Medicine: The Convergence of Human and Artificial Intelligence - Nature Medicine</a></li>
<li><a target="_blank" href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679">General Data Protection Regulation (GDPR) - European Union</a></li>
<li><a target="_blank" href="https://www.nber.org/papers/w24235">AI and Jobs: The Role of Demand - NBER</a></li>
<li><a target="_blank" href="https://link.springer.com/article/10.1007/s00146-020-00998-4">Algorithmic Trading: A Review and Evaluation of Recent Strategies - AI &amp; Society</a></li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Unsung Heroes: Top 25 Lesser-Known Python Libraries to Simplify Your Daily Workflow.]]></title><description><![CDATA[Discovering the Hidden Gems
Python's extensive library ecosystem is a goldmine of tools that can turbocharge your daily coding tasks. While libraries like NumPy, Pandas, and Matplotlib are celebrated, a host of lesser-known Python libraries can revol...]]></description><link>https://deviloper.in/unsung-heroes-top-25-lesser-known-python-libraries-to-simplify-your-daily-workflow</link><guid isPermaLink="true">https://deviloper.in/unsung-heroes-top-25-lesser-known-python-libraries-to-simplify-your-daily-workflow</guid><category><![CDATA[Python]]></category><category><![CDATA[Libraries]]></category><category><![CDATA[learnprogramming]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 14 Sep 2023 20:39:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Hzp-1ua8DVE/upload/a7cd6d4b0535edb7870fb4580d0123a5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-discovering-the-hidden-gems">Discovering the Hidden Gems</h1>
<p>Python's extensive library ecosystem is a goldmine of tools that can turbocharge your daily coding tasks. While libraries like NumPy, Pandas, and Matplotlib are celebrated, a host of lesser-known Python libraries can revolutionize your daily work. In this article, we'll delve into the top 25 hidden gems that may have slipped under your radar but hold immense potential to enhance your productivity. Each library will be accompanied by practical examples to illustrate its utility.</p>
<h2 id="heading-1-arrow-simplify-date-and-time-handling">1. Arrow - Simplify Date and Time Handling</h2>
<p>Dealing with dates and times can be a source of frustration, but Arrow simplifies the process with its user-friendly API. Say goodbye to obscure datetime formats and cumbersome calculations.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> arrow

now = arrow.now()
tomorrow = now.shift(days=<span class="hljs-number">1</span>)
print(tomorrow.format(<span class="hljs-string">'YYYY-MM-DD'</span>))
</code></pre>
<h2 id="heading-2-fuzzywuzzy-fuzzy-string-matching">2. Fuzzywuzzy - Fuzzy String Matching</h2>
<p>Fuzzywuzzy is your secret weapon for approximate string matching. It calculates similarity scores between strings, making it perfect for tasks like deduplication or record linkage.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> fuzzywuzzy <span class="hljs-keyword">import</span> fuzz

similarity = fuzz.ratio(<span class="hljs-string">"apple"</span>, <span class="hljs-string">"apples"</span>)
print(similarity)  <span class="hljs-comment"># Output: 91</span>
</code></pre>
<h2 id="heading-3-pyquery-jquery-like-web-scraping">3. Pyquery - jQuery-Like Web Scraping</h2>
<p>Pyquery simplifies web scraping with its intuitive jQuery-like syntax. No more wrestling with complex selectors; Pyquery makes extracting data from HTML or XML documents a breeze.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pyquery <span class="hljs-keyword">import</span> PyQuery <span class="hljs-keyword">as</span> pq

html = <span class="hljs-string">"&lt;div&gt;&lt;p&gt;Hello, World!&lt;/p&gt;&lt;/div&gt;"</span>
doc = pq(html)
text = doc(<span class="hljs-string">'p'</span>).text()
print(text)  <span class="hljs-comment"># Output: Hello, World!</span>
</code></pre>
<h2 id="heading-4-prettytable-create-readable-tables">4. PrettyTable - Create Readable Tables</h2>
<p>Displaying data in tabular form is made elegant with PrettyTable. It allows you to generate attractive ASCII tables from your data, making your reports and presentations shine.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> prettytable <span class="hljs-keyword">import</span> PrettyTable

table = PrettyTable()
table.field_names = [<span class="hljs-string">"Name"</span>, <span class="hljs-string">"Age"</span>]
table.add_row([<span class="hljs-string">"Alice"</span>, <span class="hljs-number">28</span>])
table.add_row([<span class="hljs-string">"Bob"</span>, <span class="hljs-number">35</span>])
print(table)
</code></pre>
<h2 id="heading-5-rich-beautify-terminal-output">5. Rich - Beautify Terminal Output</h2>
<p>Rich enhances terminal-based applications, enabling you to create colorful, richly formatted text-based user interfaces for command-line programs. Impress your users with visually appealing output.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> rich <span class="hljs-keyword">import</span> <span class="hljs-keyword">print</span>

print(<span class="hljs-string">"[bold green]Hello, World![/bold green]"</span>)
</code></pre>
<h2 id="heading-6-typer-rapid-cli-development">6. Typer - Rapid CLI Development</h2>
<p>Typer is your go-to library for rapidly developing command-line interfaces (CLIs). It simplifies argument parsing and help-text generation, reducing the hassle of CLI development.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> typer

app = typer.Typer()

<span class="hljs-meta">@app.command()</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">greet</span>(<span class="hljs-params">name: str</span>):</span>
    typer.echo(<span class="hljs-string">f"Hello, <span class="hljs-subst">{name}</span>!"</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    app()
</code></pre>
<h2 id="heading-7-dateutil-powerful-date-and-time-handling">7. dateutil - Powerful Date and Time Handling</h2>
<p>Working with dates and times just got a whole lot easier with dateutil. This library offers extensive functionality for parsing and manipulating dates and times, surpassing Python's built-in datetime module.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> dateutil <span class="hljs-keyword">import</span> parser

date_string = <span class="hljs-string">"2023-09-30T15:30:00"</span>
parsed_date = parser.parse(date_string)
print(parsed_date)
</code></pre>
<h2 id="heading-8-tqdm-beautiful-progress-bars-for-loops">8. tqdm - Beautiful Progress Bars for Loops</h2>
<p>Visualizing the progress of tasks within loops is a breeze with tqdm. Say goodbye to boring print statements and hello to elegant progress bars that keep you informed.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> tqdm <span class="hljs-keyword">import</span> tqdm
<span class="hljs-keyword">import</span> time

<span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> tqdm(range(<span class="hljs-number">100</span>)):
    time.sleep(<span class="hljs-number">0.1</span>)  <span class="hljs-comment"># Simulate some work</span>
</code></pre>
<h2 id="heading-9-pandasql-sql-queries-on-pandas-dataframes">9. pandasql - SQL Queries on Pandas DataFrames</h2>
<p>If you love working with Pandas DataFrames and SQL, pandasql is your perfect match. It seamlessly integrates SQL queries with Pandas DataFrames, allowing you to leverage your SQL skills for data manipulation.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">from</span> pandasql <span class="hljs-keyword">import</span> sqldf

df = pd.DataFrame({<span class="hljs-string">'A'</span>: [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>], <span class="hljs-string">'B'</span>: [<span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>]})
pysqldf = <span class="hljs-keyword">lambda</span> q: sqldf(q, globals())  <span class="hljs-comment"># sqldf takes a query string and a namespace</span>
result = pysqldf(<span class="hljs-string">"SELECT * FROM df WHERE A &gt; 1"</span>)
print(result)
</code></pre>
<h2 id="heading-10-tinydb-lightweight-nosql-database">10. TinyDB - Lightweight NoSQL Database</h2>
<p>TinyDB is a small but mighty NoSQL database that's perfect for small to medium-sized projects. It's easy to use and doesn't require a separate server, making it a versatile choice.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> tinydb <span class="hljs-keyword">import</span> TinyDB, Query

db = TinyDB(<span class="hljs-string">'my_db.json'</span>)
db.insert({<span class="hljs-string">'name'</span>: <span class="hljs-string">'Alice'</span>, <span class="hljs-string">'age'</span>: <span class="hljs-number">30</span>})
db.insert({<span class="hljs-string">'name'</span>: <span class="hljs-string">'Bob'</span>, <span class="hljs-string">'age'</span>: <span class="hljs-number">25</span>})

User = Query()
result = db.search(User.name == <span class="hljs-string">'Alice'</span>)
print(result)
</code></pre>
<h2 id="heading-11-unidecode-transliterate-unicode-to-ascii">11. Unidecode - Transliterate Unicode to ASCII</h2>
<p>Unidecode comes to the rescue when you need to transliterate Unicode text into ASCII characters. It simplifies text normalization and handling non-ASCII characters.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> unidecode <span class="hljs-keyword">import</span> unidecode

text = <span class="hljs-string">"Tōkyō"</span>
ascii_text = unidecode(text)
print(ascii_text)  <span class="hljs-comment"># Output: "Tokyo"</span>
</code></pre>
<h2 id="heading-12-tablib-handle-tabular-data-formats-with-ease">12. tablib - Handle Tabular Data Formats with Ease</h2>
<p>Tablib simplifies the handling of tabular data formats like Excel, CSV, and JSON. It's a versatile library that can save you time when dealing with diverse data sources.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> tablib

data = tablib.Dataset()
data.headers = [<span class="hljs-string">'Name'</span>, <span class="hljs-string">'Age'</span>]
data.append([<span class="hljs-string">'Alice'</span>, <span class="hljs-number">28</span>])
data.append([<span class="hljs-string">'Bob'</span>, <span class="hljs-number">35</span>])

<span class="hljs-comment"># Export to Excel</span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">'data.xlsx'</span>, <span class="hljs-string">'wb'</span>) <span class="hljs-keyword">as</span> f:
    f.write(data.export(<span class="hljs-string">'xlsx'</span>))
</code></pre>
<h2 id="heading-13-psutil-system-monitoring-and-management">13. psutil - System Monitoring and Management</h2>
<p>Psutil provides a wealth of information about system utilization, allowing you to monitor processes and manage system resources efficiently.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> psutil

<span class="hljs-comment"># Get CPU usage</span>
cpu_usage = psutil.cpu_percent()
print(<span class="hljs-string">f'CPU Usage: <span class="hljs-subst">{cpu_usage}</span>%'</span>)

<span class="hljs-comment"># List running processes</span>
processes = psutil.process_iter(attrs=[<span class="hljs-string">'pid'</span>, <span class="hljs-string">'name'</span>])
<span class="hljs-keyword">for</span> process <span class="hljs-keyword">in</span> processes:
    print(process.info)  <span class="hljs-comment"># .info is a dict of the requested attrs</span>
</code></pre>
<h2 id="heading-14-apscheduler-job-scheduling-in-python">14. APScheduler - Job Scheduling in Python</h2>
<p>APScheduler empowers you to schedule Python functions to run at specific intervals or times. It's a handy tool for automating repetitive tasks.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> apscheduler.schedulers.blocking <span class="hljs-keyword">import</span> BlockingScheduler

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">job_to_run</span>():</span>
    print(<span class="hljs-string">"Scheduled job ran!"</span>)

scheduler = BlockingScheduler()
scheduler.add_job(job_to_run, <span class="hljs-string">'interval'</span>, seconds=<span class="hljs-number">10</span>)
scheduler.start()
</code></pre>
<h2 id="heading-15-docx2txt-extract-text-from-docx-files">15. docx2txt - Extract Text from DOCX Files</h2>
<p>Docx2txt is a valuable library for extracting text content from Microsoft Word (DOCX) files. It's particularly useful for document analysis and data extraction.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> docx2txt <span class="hljs-keyword">import</span> process

text = process(<span class="hljs-string">"document.docx"</span>)
print(text)
</code></pre>
<h2 id="heading-16-pyperclip-simplify-clipboard-operations">16. pyperclip - Simplify Clipboard Operations</h2>
<p>Pyperclip simplifies clipboard operations in Python, making it easy to copy and paste text across different applications and platforms.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pyperclip

<span class="hljs-comment"># Copy text to clipboard</span>
pyperclip.copy(<span class="hljs-string">"This text is copied to the clipboard"</span>)

<span class="hljs-comment"># Paste text from clipboard</span>
clipboard_text = pyperclip.paste()
print(clipboard_text)
</code></pre>
<h2 id="heading-17-mtranslate-multilingual-text-translation">17. mtranslate - Multilingual Text Translation</h2>
<p>Mtranslate is a powerful library for multilingual text translation. It offers quick and easy translation capabilities to multiple languages using various translation services, making it an asset for globalized applications.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> mtranslate <span class="hljs-keyword">import</span> translate

text = <span class="hljs-string">"Hello, World!"</span>
translated_text = translate(text, <span class="hljs-string">'es'</span>)  <span class="hljs-comment"># Translate to Spanish</span>
print(translated_text)
</code></pre>
<h2 id="heading-18-simpy-discrete-event-simulation">18. simpy - Discrete Event Simulation</h2>
<p>Simpy is a discrete event simulation library that's invaluable for modeling and simulating complex systems, such as traffic flow, supply chains, or resource allocation scenarios.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> simpy

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">car</span>(<span class="hljs-params">env</span>):</span>
    <span class="hljs-keyword">while</span> <span class="hljs-literal">True</span>:
        print(<span class="hljs-string">f"Car at <span class="hljs-subst">{env.now}</span>"</span>)
        <span class="hljs-keyword">yield</span> env.timeout(<span class="hljs-number">2</span>)

env = simpy.Environment()
env.process(car(env))
env.run(until=<span class="hljs-number">10</span>)
</code></pre>
<h2 id="heading-19-phonenumbers-phone-number-parsing-and-formatting">19. phonenumbers - Phone Number Parsing and Formatting</h2>
<p>Phonenumbers simplifies phone number parsing, formatting, and validation. It's an essential tool for applications dealing with international phone numbers.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> phonenumbers

phone_number = phonenumbers.parse(<span class="hljs-string">"+1 650-253-0000"</span>, <span class="hljs-literal">None</span>)
formatted_number = phonenumbers.format_number(phone_number, phonenumbers.PhoneNumberFormat.E164)
print(formatted_number)
</code></pre>
<h2 id="heading-20-isort-import-sorting-and-formatting">20. isort - Import Sorting and Formatting</h2>
<p>Isort takes care of sorting and formatting your import statements in Python code. It ensures consistency and readability in your codebase.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Before running isort</span>
<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> date
<span class="hljs-keyword">import</span> os

<span class="hljs-comment"># After running isort</span>
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> datetime <span class="hljs-keyword">import</span> date
</code></pre>
<h2 id="heading-21-emoji-working-with-emojis-in-python">21. emoji - Working with Emojis in Python</h2>
<p>The emoji library simplifies working with emojis in Python, making it easy to insert emojis into strings or analyze emoji usage in text.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> emoji

text = <span class="hljs-string">"I love Python! :snake:"</span>
emoji_text = emoji.emojize(text)
print(emoji_text)
</code></pre>
<h2 id="heading-22-pydub-audio-processing-in-python">22. Pydub - Audio Processing in Python</h2>
<p>Pydub is a versatile library for audio processing in Python. It can be used for tasks like audio format conversion, slicing, and more.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pydub <span class="hljs-keyword">import</span> AudioSegment

sound = AudioSegment.from_file(<span class="hljs-string">"my_audio.mp3"</span>)
<span class="hljs-comment"># Convert to WAV format</span>
sound.export(<span class="hljs-string">"output.wav"</span>, format=<span class="hljs-string">"wav"</span>)
</code></pre>
<h2 id="heading-23-textract-text-extraction-from-various-file-formats">23. textract - Text Extraction from Various File Formats</h2>
<p>Textract is a handy library for extracting text content from a wide range of file formats, including PDFs, Word documents, and more.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> textract

text = textract.process(<span class="hljs-string">"document.pdf"</span>)
print(text.decode(<span class="hljs-string">"utf-8"</span>))
</code></pre>
<h2 id="heading-24-pdfminer-pdf-parsing-and-text-extraction">24. PDFMiner - PDF Parsing and Text Extraction</h2>
<p>PDFMiner is an underrated PDF parsing library that allows you to extract text and metadata from PDF files. It's highly customizable and useful for data extraction.</p>
<p>Example:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pdfminer.high_level <span class="hljs-keyword">import</span> extract_text

text = extract_text(<span class="hljs-string">"document.pdf"</span>)
print(text)
</code></pre>
<h2 id="heading-25-pytube-youtube-data-manipulation">25. Pytube - YouTube Data Manipulation</h2>
<p>Pytube simplifies working with YouTube data and videos. It allows you to download, manipulate, or extract information from YouTube videos.</p>
<p>Example (Downloading a YouTube video):</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> pytube <span class="hljs-keyword">import</span> YouTube

url = <span class="hljs-string">"https://www.youtube.com/watch?v=example_video_id"</span>
yt = YouTube(url)
stream = yt.streams.get_highest_resolution()
stream.download(output_path=<span class="hljs-string">"downloads/"</span>)
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By incorporating these lesser-known Python libraries into your toolkit, you'll not only simplify your daily work but also expand your capabilities as a Python programmer. So, embrace these hidden gems, explore their potential, and let them enhance your Python development journey.</p>
<p>Thank you for joining us on this exploration of unsung Python libraries. We hope you've discovered valuable tools to enhance your daily coding tasks. If you have any questions or need further guidance, feel free to ask. Happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[Mastering MLOps: Model Training, Versioning, and Deployment with AWS]]></title><description><![CDATA[Introduction
In the world of Machine Learning Operations (MLOps), model training, versioning, and deployment are critical steps that ensure the successful deployment and management of machine learning models in production environments. Leveraging the...]]></description><link>https://deviloper.in/mastering-mlops-model-training-versioning-and-deployment-with-aws</link><guid isPermaLink="true">https://deviloper.in/mastering-mlops-model-training-versioning-and-deployment-with-aws</guid><category><![CDATA[mlops]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[ModelTraining]]></category><category><![CDATA[ModelDeployment]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 20 Jul 2023 22:58:08 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In the world of Machine Learning Operations (MLOps), model training, versioning, and deployment are critical steps that ensure the successful deployment and management of machine learning models in production environments. Leveraging the power of Amazon Web Services (AWS), data scientists and engineers can streamline this process and build efficient, scalable, and reliable machine learning pipelines. In this blog, we will guide you through the best practices and tools for model training, versioning, and deployment using AWS for MLOps.</p>
<h3 id="heading-model-training-with-amazon-sagemaker"><strong>Model Training with Amazon SageMaker</strong></h3>
<p>Amazon SageMaker, a fully managed machine learning service, empowers data scientists to build and train models at scale. Follow these steps for efficient model training:</p>
<p>a. <strong>Data Preparation</strong>: Preprocess and clean your data using AWS Glue or other relevant services to ensure high-quality inputs for training.</p>
<p>b. <strong>Choose an Algorithm</strong>: Select an appropriate machine learning algorithm from SageMaker's vast library based on your use case and data type.</p>
<p>c. <strong>Hyperparameter Tuning</strong>: Optimize model performance by using SageMaker's automatic hyperparameter tuning or custom grid search.</p>
<p>d. <strong>Distributed Training</strong>: Scale model training using SageMaker's distributed training capabilities for large datasets.</p>
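<p>To make steps (a)-(d) concrete, here is a minimal sketch that assembles a <code>CreateTrainingJob</code> request for the low-level SageMaker API. The job name, image URI, role ARN, and S3 paths are all placeholders, and the final <code>boto3</code> call is left commented out because it requires AWS credentials:</p>

```python
import json

def build_training_job_config(job_name, image_uri, role_arn, s3_train, s3_output):
    """Assemble a CreateTrainingJob request for the SageMaker API."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,      # ECR image of the training algorithm
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                 # IAM role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_train,
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,              # raise for distributed training
            "VolumeSizeInGB": 30,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

# All of these identifiers are placeholders for illustration only.
config = build_training_job_config(
    "demo-training-job",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-image:latest",
    "arn:aws:iam::123456789012:role/DemoSageMakerRole",
    "s3://demo-bucket/train/",
    "s3://demo-bucket/output/",
)
print(json.dumps(config, indent=2))
# boto3.client("sagemaker").create_training_job(**config)  # needs AWS credentials
```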
<h3 id="heading-model-versioning-with-sagemaker-model-registry">Model Versioning with SageMaker Model Registry</h3>
<p>Versioning models is crucial for effective model management and collaboration. SageMaker Model Registry simplifies model versioning:</p>
<p>a. <strong>Model Packaging</strong>: Package trained models into containers using Docker for easy version control.</p>
<p>b. <strong>Model Registration</strong>: Register model versions in the Model Registry, enabling easy tracking and management.</p>
<p>c. <strong>Model Approval Workflow</strong>: Implement approval workflows for model versions to ensure proper validation before deployment.</p>
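<p>As a rough illustration of steps (b) and (c), the sketch below assembles a <code>CreateModelPackage</code> request that registers a model version with a pending approval status. The group name, image URI, and model artifact path are placeholders, and the actual API call is commented out since it needs AWS credentials:</p>

```python
def build_model_package_request(group_name, image_uri, model_data_url):
    """Assemble a CreateModelPackage request that registers a model version."""
    return {
        "ModelPackageGroupName": group_name,
        "ModelPackageDescription": "Demo model version",
        "ModelApprovalStatus": "PendingManualApproval",  # gate deployment on approval
        "InferenceSpecification": {
            "Containers": [{
                "Image": image_uri,              # inference container in ECR
                "ModelDataUrl": model_data_url,  # model.tar.gz produced by training
            }],
            "SupportedContentTypes": ["text/csv"],
            "SupportedResponseMIMETypes": ["text/csv"],
        },
    }

# Placeholder identifiers for illustration only.
request = build_model_package_request(
    "demo-model-group",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-inference:latest",
    "s3://demo-bucket/output/model.tar.gz",
)
print(request["ModelApprovalStatus"])
# boto3.client("sagemaker").create_model_package(**request)  # needs AWS credentials
```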
<h3 id="heading-model-deployment-with-sagemaker-endpoints">Model Deployment with SageMaker Endpoints</h3>
<p>Once you have trained and versioned your models, deploy them for real-time predictions using SageMaker Endpoints:</p>
<p>a. <strong>Endpoint Deployment</strong>: Create an endpoint for deploying models as RESTful APIs, accessible over HTTP/HTTPS.</p>
<p>b. <strong>Auto Scaling</strong>: Set up auto-scaling to handle varying workloads and ensure seamless performance.</p>
<p>c. <strong>Monitoring and Logging</strong>: Utilize Amazon CloudWatch to monitor endpoint health and capture logs for troubleshooting.</p>
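<p>A minimal sketch of calling a deployed endpoint through the <code>sagemaker-runtime</code> API is shown below. The endpoint name is a placeholder, and a tiny stub client stands in for the real <code>boto3</code> client so the example runs without AWS access:</p>

```python
def invoke(client, endpoint_name, features):
    """Send one CSV record to a SageMaker endpoint and return the decoded reply."""
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="text/csv",
        Body=",".join(str(f) for f in features),
    )
    return response["Body"].read().decode("utf-8")

# In production: client = boto3.client("sagemaker-runtime")
# Here a stub stands in so the sketch runs without AWS credentials.
class _StubBody:
    def read(self):
        return b"0.87"

class _StubClient:
    def invoke_endpoint(self, **kwargs):
        return {"Body": _StubBody()}

prediction = invoke(_StubClient(), "demo-endpoint", [1.5, 2.0, 3.2])
print(prediction)
```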
<h3 id="heading-continuous-deployment-with-aws-codepipeline">Continuous Deployment with AWS CodePipeline</h3>
<p>Automate the end-to-end model deployment process using AWS CodePipeline:</p>
<p>a. <strong>CI/CD Pipelines</strong>: Create continuous integration and continuous deployment pipelines to automate model training, testing, and deployment.</p>
<p>b. <strong>Integration with SageMaker</strong>: Integrate SageMaker model training and endpoint deployment into your CodePipeline.</p>
<p>c. <strong>Automated Model Updates</strong>: Set up automated model updates whenever a new version is registered in the Model Registry.</p>
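<p>One way to sketch the trigger in step (c) is to poll the Model Registry and pick the newest <em>Approved</em> version. The helper below operates on a <code>ListModelPackages</code>-style response; the sample records are made up for illustration:</p>

```python
def select_latest_approved(model_packages):
    """Pick the most recently created Approved model version from a
    ListModelPackages-style summary list."""
    approved = [p for p in model_packages
                if p["ModelApprovalStatus"] == "Approved"]
    if not approved:
        return None
    return max(approved, key=lambda p: p["CreationTime"])

# In production the list would come from
# boto3.client("sagemaker").list_model_packages(
#     ModelPackageGroupName="demo-model-group")["ModelPackageSummaryList"]
sample = [
    {"ModelPackageArn": "arn:demo/1", "ModelApprovalStatus": "Approved", "CreationTime": 1},
    {"ModelPackageArn": "arn:demo/2", "ModelApprovalStatus": "PendingManualApproval", "CreationTime": 2},
    {"ModelPackageArn": "arn:demo/3", "ModelApprovalStatus": "Approved", "CreationTime": 3},
]
latest = select_latest_approved(sample)
print(latest["ModelPackageArn"])
```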
<h2 id="heading-conclusion">Conclusion</h2>
<p>By following the best practices and leveraging AWS services, you can streamline model training, versioning, and deployment, making MLOps a seamless and efficient process. Amazon SageMaker simplifies model training, AWS CodePipeline automates continuous deployment, and the SageMaker Model Registry facilitates versioning and collaboration. With AWS as your ally, you can confidently deploy and manage machine learning models at scale, unlocking the full potential of MLOps for your organization.</p>
]]></content:encoded></item><item><title><![CDATA[Streamlining Data Preparation for MLOps with AWS: Best Practices and Tools]]></title><description><![CDATA[Introduction
Data preparation is a critical step in any Machine Learning Operations (MLOps) workflow. It involves collecting, cleaning, and transforming raw data into a format suitable for model training and deployment. With AWS's powerful suite of d...]]></description><link>https://deviloper.in/streamlining-data-preparation-for-mlops-with-aws-best-practices-and-tools</link><guid isPermaLink="true">https://deviloper.in/streamlining-data-preparation-for-mlops-with-aws-best-practices-and-tools</guid><category><![CDATA[#datapreparation]]></category><category><![CDATA[data-engineering]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[mlops]]></category><category><![CDATA[DataTransformation]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 20 Jul 2023 22:35:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1689892661475/f84dafa1-e4ae-463e-b12d-43f6021afa27.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Data preparation is a critical step in any <strong>Machine Learning Operations</strong> (<strong>MLOps</strong>) workflow. It involves collecting, cleaning, and transforming raw data into a format suitable for model training and deployment. With AWS's powerful suite of data management services, data scientists can streamline this process to ensure high-quality data inputs and efficient machine learning pipelines. In this article, we will explore the best practices and AWS tools to perform data preparation effectively for MLOps.</p>
<ol>
<li><strong>Data Collection and Storage</strong></li>
</ol>
<p>AWS offers Amazon S3, a reliable and scalable object storage service, to store vast amounts of raw and processed data securely. Use S3 buckets to organize and manage data for different machine learning projects. Ensure data versioning and encryption to maintain data integrity and security.</p>
<ul>
<li>Use Amazon S3 (Simple Storage Service) to store raw data securely and durably.</li>
<li>Upload your datasets to S3 buckets, organizing them by project or dataset types.</li>
</ul>
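<p>As a small illustration, the helper below composes S3 object keys following the project/dataset-type convention. The bucket and file names are placeholders, and the upload call is commented out because it requires AWS credentials:</p>

```python
def build_object_key(project, dataset_type, filename):
    """Compose an S3 key that organizes datasets by project and type."""
    return f"{project}/{dataset_type}/{filename}"

key = build_object_key("churn-model", "raw", "customers.csv")  # placeholder names
print(key)
# Upload with boto3 (requires AWS credentials and an existing bucket):
# import boto3
# boto3.client("s3").upload_file("customers.csv", "demo-ml-bucket", key)
```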
<ol start="2">
<li><strong>Data Exploration and Cleaning</strong></li>
</ol>
<p>Before proceeding with model training, explore the data using AWS Glue or Amazon Athena. Identify missing values, outliers, and inconsistencies that might affect the model's performance. Cleanse the data by removing irrelevant features and handling missing values appropriately.</p>
<ul>
<li>You can use AWS Glue or Amazon Athena to query and analyze data directly from S3.</li>
<li>AWS Glue can also help you discover the schema of your data and catalog it.</li>
</ul>
<ol start="3">
<li><strong>Data Transformation</strong></li>
</ol>
<p>AWS Glue ETL jobs can be leveraged to transform data into the required format for machine learning models. Use AWS Glue's DataBrew for visual data preparation tasks, simplifying data transformations even for non-technical users.</p>
<ul>
<li>Utilize AWS Glue or AWS DataBrew to clean and transform raw data into a suitable format for training.</li>
</ul>
<ol start="4">
<li><strong>Data Sampling and Splitting</strong></li>
</ol>
<p>To avoid overfitting and ensure unbiased model evaluation, split the dataset into training, validation, and testing sets using AWS Glue or SageMaker Processing Jobs.</p>
<ul>
<li>Divide the prepared data into training, validation, and testing sets.</li>
<li>You can use AWS services like Amazon SageMaker or AWS DataBrew to perform data splitting and sampling.</li>
</ul>
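<p>The split itself can also be done in plain Python before the data reaches a managed service. A minimal, deterministic sketch (the 70/15/15 ratio is just an example):</p>

```python
import random

def train_val_test_split(rows, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle rows deterministically and split into train/validation/test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    n = len(rows)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = rows[:n_test]
    val = rows[n_test:n_test + n_val]
    train = rows[n_test + n_val:]
    return train, val, test

train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```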
<ol start="5">
<li><strong>Feature Engineering</strong></li>
</ol>
<p>AWS SageMaker provides a range of built-in algorithms and tools for feature engineering. Utilize these capabilities to create new features that can improve model performance.</p>
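<p>As a toy illustration of feature engineering, the sketch below derives two new features from a raw record. The field names and the fixed reference date are made up for the example:</p>

```python
from datetime import date

def engineer_features(record):
    """Derive new features from a raw customer record (illustrative names)."""
    signup = date.fromisoformat(record["signup_date"])
    today = date(2023, 7, 20)  # fixed reference date so the example is reproducible
    return {
        **record,
        "tenure_days": (today - signup).days,
        "spend_per_order": record["total_spend"] / max(record["orders"], 1),
    }

row = {"signup_date": "2023-01-20", "total_spend": 250.0, "orders": 5}
features = engineer_features(row)
print(features["tenure_days"], features["spend_per_order"])  # 181 50.0
```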
<ol start="6">
<li><strong>Data Versioning and Tracking</strong></li>
</ol>
<p>AWS SageMaker Model Registry enables versioning of data and models. Ensure that data scientists and engineers can access and track data changes throughout the development and deployment lifecycle.</p>
<ul>
<li>Employ version control systems like AWS CodeCommit or GitHub to manage changes to your data preparation scripts and configurations.</li>
<li>This enables you to track the evolution of data preprocessing steps and roll back if necessary.</li>
</ul>
<ol start="7">
<li><strong>Automated Data Pipelines</strong></li>
</ol>
<p>Build automated data pipelines using AWS Step Functions or Apache Airflow to orchestrate the data preparation workflow seamlessly. This ensures consistency and reduces manual errors.</p>
<ul>
<li>Construct data pipelines using AWS services like AWS Data Pipeline or AWS Step Functions to automate data ingestion and preprocessing.</li>
<li>These pipelines can schedule and orchestrate the data preparation process, making it efficient and reproducible.</li>
</ul>
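<p>To make the orchestration idea concrete, here is a hypothetical three-step workflow expressed in the Amazon States Language used by AWS Step Functions. The Lambda ARNs are placeholders, and the <code>create_state_machine</code> call is commented out because it needs AWS credentials:</p>

```python
import json

# Hypothetical ingest -> clean -> split workflow in the Amazon States Language.
state_machine = {
    "Comment": "Ingest, clean, and split a dataset",
    "StartAt": "Ingest",
    "States": {
        "Ingest": {"Type": "Task",
                   "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ingest",
                   "Next": "Clean"},
        "Clean": {"Type": "Task",
                  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:clean",
                  "Next": "Split"},
        "Split": {"Type": "Task",
                  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:split",
                  "End": True},
    },
}
definition = json.dumps(state_machine)
print(state_machine["StartAt"])
# boto3.client("stepfunctions").create_state_machine(
#     name="data-prep", definition=definition,
#     roleArn="arn:aws:iam::123456789012:role/DemoRole")  # needs AWS credentials
```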
<ol start="8">
<li><strong>Data Security and Compliance</strong></li>
</ol>
<p>Implement AWS Identity and Access Management (IAM) roles and policies to control access to data. Ensure compliance with data protection regulations to safeguard sensitive information.</p>
<ol start="9">
<li><strong>Monitoring and Logging</strong></li>
</ol>
<p>Set up monitoring using Amazon CloudWatch to track data processing and identify any issues in real-time. Log data preparation steps to facilitate debugging and performance optimization.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Efficient data preparation is the foundation of successful MLOps. AWS provides a comprehensive set of services that empower data scientists to handle data effectively for machine learning projects. By following best practices and leveraging AWS tools like Amazon S3, AWS Glue, and SageMaker, organizations can streamline data preparation, leading to more accurate and reliable machine learning models. As MLOps continues to evolve, leveraging AWS services will play a pivotal role in achieving seamless data management and driving innovation in the world of machine learning.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering MLOps: Empowering Machine Learning with AWS Services]]></title><description><![CDATA[Introduction
Machine Learning Operations (MLOps) has emerged as a critical discipline that bridges the gap between data science and software engineering. MLOps aims to streamline the development, deployment, and management of machine learning models,...]]></description><link>https://deviloper.in/mastering-mlops-empowering-machine-learning-with-aws-services</link><guid isPermaLink="true">https://deviloper.in/mastering-mlops-empowering-machine-learning-with-aws-services</guid><category><![CDATA[mlops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[CI/CD]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Thu, 20 Jul 2023 21:54:41 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction"><strong>Introduction</strong></h2>
<p><strong>Machine Learning Operations (MLOps)</strong> has emerged as a critical discipline that bridges the gap between data science and software engineering. MLOps aims to streamline the development, deployment, and management of machine learning models, making them more efficient, scalable, and reliable in real-world applications. AWS, as a leading cloud provider, offers a comprehensive suite of services that empower organizations to implement MLOps practices seamlessly. In this article, we will explore how AWS services can be leveraged to master the art of MLOps.</p>
<h3 id="heading-data-management-and-preparation"><strong>Data Management and Preparation</strong></h3>
<p>The foundation of any successful machine learning project lies in the quality and management of data. AWS provides various services that facilitate data storage, transformation, and preparation:</p>
<p>a. <strong>Amazon S3</strong>: Amazon Simple Storage Service (S3) is a highly scalable object storage service that allows data scientists to store and manage vast amounts of data securely.</p>
<p>b. <strong>AWS Glue</strong>: AWS Glue is a fully managed extract, transform, and load (ETL) service that simplifies data preparation tasks. It can automatically discover and catalog metadata from various data sources, making it easier to create efficient data pipelines.</p>
<h3 id="heading-model-development-and-training"><strong>Model Development and Training</strong></h3>
<p>AWS provides a range of services to support model development and training, offering flexibility and scalability to data scientists:</p>
<p>a. <strong>Amazon SageMaker</strong>: Amazon SageMaker is a fully managed machine learning service that enables data scientists and developers to build, train, and deploy machine learning models at scale. It supports popular machine learning frameworks like TensorFlow, PyTorch, and MXNet.</p>
<p>b. <strong>AWS Deep Learning AMIs</strong>: AWS Deep Learning AMIs provide pre-configured environments with optimized deep learning frameworks and tools, reducing the time spent on setting up development environments.</p>
<h3 id="heading-model-versioning-and-tracking"><strong>Model Versioning and Tracking</strong></h3>
<p>Effective model versioning and tracking are crucial for collaboration and reproducibility:</p>
<p>a. <strong>Amazon SageMaker Model Registry</strong>: The SageMaker Model Registry enables the versioning and management of trained models, facilitating collaboration between data scientists and engineers.</p>
<p>b. <strong>Git Integration</strong>: By integrating AWS services with version control systems like Git, data scientists can effectively track changes to code, data, and model configurations.</p>
<h3 id="heading-continuous-integration-and-continuous-deployment-cicd"><strong>Continuous Integration and Continuous Deployment (CI/CD)</strong></h3>
<p>The CI/CD approach automates the machine learning workflow, ensuring efficient and reliable model deployment:</p>
<p>a. <strong>AWS CodePipeline</strong>: CodePipeline is a continuous integration and continuous deployment service that automates the end-to-end ML model deployment process. It connects various AWS services and triggers actions based on code commits or model updates.</p>
<p>b. <strong>AWS CodeBuild</strong>: CodeBuild automates the building and packaging of machine learning models, ensuring consistency across development and production environments.</p>
<h3 id="heading-model-deployment-and-inference">Model Deployment and Inference</h3>
<p>After model training, deploying and serving the models effectively is vital for successful MLOps:</p>
<p>a. <strong>Amazon SageMaker Endpoints</strong>: SageMaker Endpoints allow easy deployment of trained models as RESTful APIs, making it seamless to integrate machine learning predictions into applications.</p>
<p>b. <strong>AWS Lambda</strong>: AWS Lambda can be used for serverless model inference, providing a cost-efficient and scalable solution for real-time predictions.</p>
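<p>A minimal sketch of such a Lambda function is shown below. The endpoint name is a placeholder, and the handler accepts an optional client argument purely so the example can run locally against a stub instead of a real endpoint:</p>

```python
import json

def lambda_handler(event, context, client=None):
    """Hypothetical Lambda entry point that forwards a request to a
    SageMaker endpoint and returns the prediction."""
    if client is None:  # real handler path, requires AWS credentials
        import boto3
        client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName="demo-endpoint",       # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps(event["features"]),
    )
    prediction = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}

# Local stub so the sketch runs without AWS access:
class _StubBody:
    def read(self):
        return b"[0.42]"

class _StubClient:
    def invoke_endpoint(self, **kwargs):
        return {"Body": _StubBody()}

result = lambda_handler({"features": [1, 2, 3]}, None, client=_StubClient())
print(result["body"])
```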
<h3 id="heading-monitoring-and-logging"><strong>Monitoring and Logging</strong></h3>
<p>Monitoring and logging are essential for maintaining the health and performance of deployed models:</p>
<p>a. <strong>Amazon CloudWatch</strong>: CloudWatch helps monitor and gain insights into the performance of deployed models, enabling proactive troubleshooting and issue resolution.</p>
<p>b. <strong>Amazon CloudWatch Logs</strong>: CloudWatch Logs allows you to capture and store logs, making it easier to track model behavior and potential errors.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Implementing MLOps using AWS services is a powerful way to streamline machine learning workflows, enabling organizations to develop, deploy, and manage machine learning models efficiently and effectively. By leveraging AWS's diverse range of services, data scientists and engineers can build scalable, reliable, and secure machine learning solutions for a wide range of applications. As MLOps continues to evolve, AWS's commitment to providing cutting-edge tools and services will undoubtedly play a crucial role in driving innovation and success in the world of machine learning.</p>
]]></content:encoded></item><item><title><![CDATA[OOPs Introduction -Java]]></title><description><![CDATA[OOPs is known as “Object Oriented Programming”.OOP is a Programming paradigm or a feature based on the concept of “Objects”. A programming paradigm means way of organizing a program.  The two major programming paradigms are:

Structured Programming p...]]></description><link>https://deviloper.in/oops-introduction-java</link><guid isPermaLink="true">https://deviloper.in/oops-introduction-java</guid><category><![CDATA[Java]]></category><category><![CDATA[OOPS]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Object Oriented Programming]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Sat, 26 Feb 2022 22:29:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1645914245949/-f-FbOIKO.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>OOPs is known as “Object Oriented Programming”.OOP is a Programming paradigm or a feature based on the concept of “<strong>Objects</strong>”. A programming paradigm means way of organizing a program.  The two major programming paradigms are:</p>
<ul>
<li><strong>Structured Programming paradigm</strong> - adopted by <strong>C, Pascal</strong></li>
<li><strong>Object Oriented Programming (OOP) paradigm</strong> - adopted by <strong>C++, Java, C#, VB.NET</strong></li>
</ul>
<p>Before we move on to <strong>OOPs</strong> concepts, let's talk a bit about the <strong>Structured Programming paradigm</strong> and its disadvantages.</p>
<h2 id="heading-structured-programming-paradigm"><strong>Structured Programming paradigm</strong></h2>
<ol>
<li>Emphasis on breaking the given task into smaller sub-tasks</li>
<li>For each sub-task functions are written</li>
<li>These functions are called directly or indirectly from main()</li>
<li>No importance given to data, it is just passed from one function to another as required.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645914312868/y191p1o5N.png" alt="Fig.-1.1-Structure-of-procedural-oriented-programs.png" /></p>
<h4 id="heading-structured-programming-in-everyday-life">Structured Programming in Everyday Life</h4>
<p><strong>1. Sequence</strong>  Execute a list of statements in order.</p>
<p><strong><em>Example: Baking Bread</em></strong></p>
<pre><code>Add flour.
Add salt.
Add yeast.
Mix.
Add water.
Knead.
Let rise.
Bake.
</code></pre>
<p><strong>2. Repetition</strong>: Repeat a block of statements while a condition is true.</p>
<p><strong><em>Example: Washing Dishes</em></strong></p>
<pre><code>Stack dishes by sink.
Fill sink with hot soapy water.
While moreDishes
  Get dish from counter,
  Wash dish,
  Put dish in drain rack.
End While
Wipe off counter.
Rinse out sink.
</code></pre>
<p><strong>3. Selection</strong>: Choose at most one action from several alternative conditions.</p>
<p><strong><em>Example: Sorting Mail</em></strong></p>
<pre><code>Get mail from mailbox.
Put mail on table.
While moreMailToSort
  Get piece of mail from table.
  If pieceIsPersonal Then
    Read it.
  ElseIf pieceIsMagazine Then
    Put in magazine rack.
  ElseIf pieceIsBill Then
    Pay it.
  ElseIf pieceIsJunkMail Then
    Throw in wastebasket.
  End If
End While
</code></pre>
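The mail-sorting routine above maps directly onto code. Here is a minimal Java sketch of the same three constructs; the class and method names are illustrative, not from any real library:

```java
import java.util.List;

public class MailSorter {
    // Selection: choose at most one action per piece of mail
    static String sort(String piece) {
        if (piece.equals("personal")) {
            return "read it";
        } else if (piece.equals("magazine")) {
            return "magazine rack";
        } else if (piece.equals("bill")) {
            return "pay it";
        } else { // junk mail
            return "wastebasket";
        }
    }

    public static void main(String[] args) {
        // Sequence: get mail from the mailbox and put it on the table (a list)
        List<String> table = List.of("personal", "bill", "flyer");
        // Repetition: handle each piece while more mail remains to sort
        for (String piece : table) {
            System.out.println(piece + " -> " + sort(piece));
        }
    }
}
```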
<h4 id="heading-disadvantages-of-structured-programming">Disadvantages of Structured programming</h4>
<ol>
<li>The primary components of structured programming (functions and data structures) did not model real-world problems in a natural way.</li>
<li>Mechanisms for reusing existing code were limited.</li>
<li>Maintaining, debugging, and upgrading large programs was difficult.</li>
</ol>
<h2 id="heading-oops-object-oriented-programing">OOPs (Object Oriented Programming)</h2>
<ol>
<li>Emphasis on identifying objects in a given problem and then writing programs to facilitate interaction between objects</li>
<li>Objects contain data and functions that can access/manipulate the data</li>
<li>Equal importance to data as data and functions go together.</li>
<li>Object-oriented programming aims to model real-world entities in programs, using concepts such as inheritance, data hiding, and polymorphism</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645914337920/bA7jU0HMd.png" alt="java-oops.png" /></p>
<h4 id="heading-characteristics-of-oops">Characteristics of OOPs</h4>
<ol>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/javaOO/objects.html">Objects</a></strong>  : In structured programming, a problem is approached by dividing it into functions. In OOP, the problem is instead divided into objects. Thinking in terms of objects rather than functions makes designing a program easier. For example: </p>
<ol>
<li>Employees in a Payroll  Processing System</li>
<li>GUI elements like windows, menus ,icons, etc.</li>
<li>Elements in games like guns, characters, vehicle, etc.</li>
</ol>
</li>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/javaOO/classes.html">Classes</a></strong> : The class is one of the basic concepts of OOP: a group of similar entities. It is only a logical component, not a physical entity. A class serves as a blueprint, plan, or template: it specifies what data and what functions will be included in objects of that type. For example:</p>
<ol>
<li>If you had a class called “Expensive Cars”, it could have objects like Mercedes, BMW, Toyota, etc.</li>
<li>If you had an online shopping system, it could have objects such as “shopping cart”, “customer”, and “product”.</li>
<li>If you had a house, it could have objects like rooms, restrooms, kitchen, etc.</li>
</ol>
</li>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/IandI/subclasses.html">Inheritance</a></strong> : Inheritance is an important pillar of OOP (Object Oriented Programming). It is the mechanism in Java by which one class is allowed to inherit the features (fields and methods) of another class. </p>
<p>Let us discuss some frequently used terminology:</p>
<ul>
<li><strong>Super Class:</strong> The class whose features are inherited is known as the superclass (or base class, or parent class).</li>
<li><strong>Sub Class:</strong> The class that inherits from another class is known as the subclass (or derived class, extended class, or child class). The subclass can add its own fields and methods in addition to the superclass's fields and methods.</li>
<li><strong>Reusability:</strong> Inheritance supports the concept of “reusability”: when we want to create a new class and there is already a class that includes some of the code we need, we can derive the new class from the existing one, reusing its fields and methods.</li>
</ul>
</li>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/IandI/abstract.html">Abstraction</a></strong> : Data abstraction is the property by virtue of which only the essential details are shown to the user; the trivial or non-essential units are hidden. For example, a car is viewed as a car rather than as its individual components.
Data abstraction may also be defined as the process of identifying only the required characteristics of an object, ignoring the irrelevant details. The properties and behaviors of an object differentiate it from other objects of a similar type and also help in classifying/grouping objects.
Consider a real-life example of a man driving a car. He knows that pressing the accelerator increases the speed and applying the brakes stops the car, but he does not know the inner mechanism of the car or how the accelerator and brakes are implemented. This is what abstraction is.
In Java, abstraction is achieved by <strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/IandI/createinterface.html">interfaces</a></strong> and <strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/IandI/abstract.html">abstract classes</a></strong>. We can achieve 100% abstraction using interfaces.</p>
</li>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html">Encapsulation</a></strong> : It is defined as the wrapping up of data under a single unit. It is the mechanism that binds together code and the data it manipulates. Another way to think about encapsulation is, it is a protective shield that prevents the data from being accessed by the code outside this shield. </p>
<ul>
<li>Technically, in encapsulation the variables or data of a class are hidden from other classes and can be accessed only through member functions of the class in which they are declared.</li>
<li>Since the data in a class is hidden from other classes, encapsulation is also known as <strong>data hiding</strong>.</li>
<li>Encapsulation can be achieved by declaring all the variables in a class as private and writing public methods to set and get their values.</li>
</ul>
</li>
<li><p><strong><a target="_blank" href="https://docs.oracle.com/javase/tutorial/java/IandI/polymorphism.html">Polymorphism</a></strong></p>
<p>It refers to the ability of an OOP language to differentiate between entities with the same name. Java does this with the help of the signature and declaration of these entities. </p>
<blockquote>
<p><strong>Note:</strong> Polymorphism in Java is mainly of 2 types: </p>
<ol>
<li>Overloading</li>
<li>Overriding </li>
</ol>
</blockquote>
</li>
</ol>
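The characteristics above can be tied together in one compact Java sketch. The <code>Vehicle</code>/<code>Car</code>/<code>Truck</code> classes below are illustrative names of mine, not from the article:

```java
// Abstraction: callers see accelerate() and horn(), not the internals.
abstract class Vehicle {
    // Encapsulation: speed is private and changed only through methods.
    private int speed;

    public int getSpeed() { return speed; }
    public void accelerate(int by) { speed += by; }

    // Subclasses must supply their own implementation.
    public abstract String horn();

    // Overloading: same method name, different parameter lists.
    public String describe() { return "vehicle at " + speed; }
    public String describe(String owner) { return owner + "'s " + describe(); }
}

// Inheritance: Car and Truck reuse Vehicle's fields and methods.
class Car extends Vehicle {
    // Overriding: a Car-specific horn()
    @Override
    public String horn() { return "Beep!"; }
}

class Truck extends Vehicle {
    @Override
    public String horn() { return "Honk!"; }
}

public class OopDemo {
    public static void main(String[] args) {
        // Polymorphism: a Vehicle reference can hold any subclass object.
        Vehicle v = new Car();
        v.accelerate(40);
        System.out.println(v.horn() + " " + v.describe("Rahul"));
        v = new Truck();
        System.out.println(v.horn());
    }
}
```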
]]></content:encoded></item><item><title><![CDATA[How to Read Emails Using Python?]]></title><description><![CDATA[How to Read Emails Using Python.
As we know Python is being used widely across every domain. And I bet, every programmer had thought about building some kind of virtual assistance(let's call it VA) after watching "Iron Man Movie". In VA we add logic ...]]></description><link>https://deviloper.in/how-to-read-emails-using-python</link><guid isPermaLink="true">https://deviloper.in/how-to-read-emails-using-python</guid><category><![CDATA[Python 3]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Mon, 31 Jan 2022 00:03:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643585329054/H3Jlmm-Ic.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-how-to-read-emails-using-python">How to Read Emails Using Python.</h1>
<p>As we know, Python is being used widely across every domain. And I bet every programmer has thought about building some kind of virtual assistant (let's call it <strong>VA</strong>) after watching an "<strong>Iron Man</strong>" movie. In a <strong>VA</strong> we add logic to do different tasks like opening applications, searching the web, solving mathematical calculations, weather updates, reminders, to-dos, and this can go on and on. So, you might be thinking...</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766144747411/464bd250-be09-4b5b-a099-af4a9f1fa524.jpeg" alt class="image--center mx-auto" /></p>
<p>Come Straight to the Point</p>
<p><strong>Okay, But Hindustani Bhau has something to say to you!!</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766144842079/b0c6e751-2ca8-4c3f-8270-0c0169485d07.jpeg" alt class="image--center mx-auto" /></p>
<p>Please be patient</p>
<p>We can add an email reader to your <strong>VA</strong> as well. To do so, we first need to understand the few things mentioned below.</p>
<ul>
<li><p><strong>Libraries to communicate with email services providers</strong></p>
</li>
<li><p><strong>A purpose, like downloading bills, tracking shopping orders, data analytics, reminders, follow-ups, auto-replies, and much more, depending on the need</strong></p>
</li>
</ul>
<p>Let's get started with some technicalities, and as an example we will serve one purpose through Python. Excited, right?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766144910758/66a46ffb-23c1-4bc1-9e6d-b256083c62a7.jpeg" alt class="image--center mx-auto" /></p>
<p>Now it's going to be fun</p>
<h3 id="heading-libraries-imaptoolshttpsgithubcomikvkimaptools-or-imaplibhttpsdocspythonorg3libraryimaplibhtml">Libraries: <a target="_blank" href="https://github.com/ikvk/imap_tools">imap_tools</a> or <a target="_blank" href="https://docs.python.org/3/library/imaplib.html">imaplib</a></h3>
<h4 id="heading-imaptoolshttpsgithubcomikvkimaptools"><a target="_blank" href="https://github.com/ikvk/imap_tools">imap_tools</a></h4>
<p>The easiest tool I've found for reading emails in Python is <a target="_blank" href="https://github.com/ikvk/imap_tools">imap_tools</a>. It has an elegant interface to communicate with your email provider using IMAP (which almost every email provider will have).</p>
<p>First, you access the MailBox; for which you need to get the IMAP server and login credentials (username and password). You should be able to find this in your email provider's help or settings (e.g. <a target="_blank" href="https://support.google.com/a/answer/9003945">here's a guide for Gmail</a>).</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> imap_tools <span class="hljs-keyword">import</span> MailBox, AND

<span class="hljs-comment"># Server is the address of the IMAP server</span>
mb = MailBox(server).login(user, password)
</code></pre>
<p>Then you can search for messages based on <a target="_blank" href="https://tools.ietf.org/html/rfc3501#section-6.4.4">RFC 3501 Search Criteria</a>. There are lots of examples in the <a target="_blank" href="https://github.com/ikvk/imap_tools#search-criteria">imap_tools README</a>; you can search based on the sender, subject, text, date, and others.</p>
<pre><code class="lang-python"><span class="hljs-comment"># Fetch all unseen emails containing "xyz.com" in the from field</span>
<span class="hljs-comment"># Don't mark them as seen</span>
<span class="hljs-comment"># Set bulk=True to read them all into memory in one fetch</span>
<span class="hljs-comment"># (as opposed to in streaming which is slower but uses less memory)</span>
messages = mb.fetch(criteria=AND(seen=<span class="hljs-literal">False</span>, from_=<span class="hljs-string">"xyz.com"</span>),
                        mark_seen=<span class="hljs-literal">False</span>,
                        bulk=<span class="hljs-literal">True</span>)
</code></pre>
<p>Then you can access things like the subject, from address, date, and text and HTML content using <a target="_blank" href="https://github.com/ikvk/imap_tools#email-attributes">simple attributes</a>.</p>
<pre><code class="lang-python">files = []
<span class="hljs-keyword">for</span> msg <span class="hljs-keyword">in</span> messages:
    <span class="hljs-comment"># Print from address and subject</span>
    print(msg.from_, <span class="hljs-string">': '</span>, msg.subject)
    <span class="hljs-comment"># Print the plain text (if there is one)</span>
    print(msg.text)
    <span class="hljs-comment"># Add attachments</span>
    files += [att.payload <span class="hljs-keyword">for</span> att <span class="hljs-keyword">in</span> msg.attachments <span class="hljs-keyword">if</span> att.filename.endswith(<span class="hljs-string">'.pdf'</span>)]
</code></pre>
<p>It also handles <a target="_blank" href="https://github.com/ikvk/imap_tools#actions-with-emails">actions</a> on emails such as flagging as seen, moving, and deleting messages.</p>
<h4 id="heading-imaplibhttpsdocspythonorg3libraryimaplibhtml"><a target="_blank" href="https://docs.python.org/3/library/imaplib.html">imaplib</a></h4>
<p>Python has the built-in <a target="_blank" href="https://docs.python.org/3/library/imaplib.html">imaplib</a> for IMAP and <a target="_blank" href="https://docs.python.org/3/library/email.html">email</a> for processing emails. Unfortunately, they're quite low-level and require a bit more work to use than imap_tools.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> imaplib
<span class="hljs-keyword">import</span> email

mb = imaplib.IMAP4_SSL(server)
rv, message = mb.login(user, password)
<span class="hljs-comment"># 'OK', [b'LOGIN completed']</span>
rv, num_emails = mb.select(<span class="hljs-string">'Inbox'</span>)
<span class="hljs-comment"># 'OK', [b'22']</span>

<span class="hljs-comment"># Get unread messages</span>
rv, messages = mb.search(<span class="hljs-literal">None</span>, <span class="hljs-string">'UNSEEN'</span>)
<span class="hljs-comment"># 'OK', [b'21 22']</span>

<span class="hljs-comment"># Download a message</span>
typ, data = mb.fetch(<span class="hljs-string">b'21'</span>, <span class="hljs-string">'(RFC822)'</span>)

<span class="hljs-comment"># Parse the email</span>
msg = email.message_from_bytes(data[<span class="hljs-number">0</span>][<span class="hljs-number">1</span>])
print(msg[<span class="hljs-string">'From'</span>], <span class="hljs-string">":"</span>, msg[<span class="hljs-string">'Subject'</span>])

<span class="hljs-comment"># Print the plain text part (note: the first part is not always plain text)</span>
print(msg.get_payload()[<span class="hljs-number">0</span>].get_payload())
</code></pre>
<p>Once you go through these libraries, you will get an idea of how to serve a specific purpose according to your requirements. Then you will be like...</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766144972595/8200245d-d031-4dd5-8aa2-efd439a6ee2c.jpeg" alt class="image--center mx-auto" /></p>
<p>I know everything, I'm an expert!!</p>
<h3 id="heading-purpose-lets-make-an-email-alert-with-a-voice-in-python">Purpose: Let's make an email alert with a voice in Python.</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766145017343/6312b4d3-0c0c-47ee-a314-7c2dcac54680.jpeg" alt class="image--center mx-auto" /></p>
<p>Then, Do It</p>
<p>In order to build a solution for this purpose, we have to do a few things, as listed below:</p>
<ul>
<li><p>Install required Libraries such as</p>
<ul>
<li><p><a target="_blank" href="https://pypi.org/project/gTTS/"><strong>GTTS (Google Text to Speech)</strong></a> : for text to speech conversion.</p>
</li>
<li><p><a target="_blank" href="https://pypi.org/project/playsound/"><strong>playsound</strong></a> : to play audio files.</p>
</li>
</ul>
</li>
<li><p>Fetch the latest unread emails from the email provider.</p>
</li>
<li><p>Convert the text from the email to speech using <strong>GTTS</strong> and finally play it using <strong>playsound</strong></p>
</li>
</ul>
<p>Let's import all the required libraries we need in this script.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> imap_tools <span class="hljs-keyword">import</span> MailBox,AND
<span class="hljs-keyword">import</span> getpass
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">from</span> gtts <span class="hljs-keyword">import</span> gTTS
<span class="hljs-keyword">import</span> playsound
</code></pre>
<p>In the above, we have used the <a target="_blank" href="https://docs.python.org/3/library/getpass.html"><strong>getpass</strong></a> module to get the “login name” of the user, and <a target="_blank" href="https://www.askpython.com/python-modules/python-json-module"><strong>json</strong></a> to read the <strong>mailconfig.json</strong> file that holds the user credentials and server configs.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"mail"</span>:
    {
        <span class="hljs-attr">"ORG_EMAIL"</span>:<span class="hljs-string">"@example.com"</span>,
        <span class="hljs-attr">"FROM_EMAIL"</span>:<span class="hljs-string">"abc"</span>,
        <span class="hljs-attr">"FROM_PWD"</span>:<span class="hljs-string">"password"</span>,
        <span class="hljs-attr">"SMTP_SERVER"</span>:<span class="hljs-string">"imap.example.com"</span>,
        <span class="hljs-attr">"SMTP_PORT"</span>:<span class="hljs-string">"993"</span>
    }
}
</code></pre>
<p>We use <a target="_blank" href="https://github.com/ikvk/imap_tools"><strong>imap_tools</strong></a> for communication with the email service provider, such as Gmail, Outlook, etc.</p>
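As a small sketch of how the script might consume this config (the helper names below are mine, not from the article): load <strong>mailconfig.json</strong> with <code>json</code>, then join <code>FROM_EMAIL</code> with <code>ORG_EMAIL</code> to get the full login address.

```python
import json

def load_mail_config(path="mailconfig.json"):
    # Read the JSON config shown above (hypothetical helper name)
    with open(path) as f:
        return json.load(f)

def full_address(configdata):
    # e.g. "abc" + "@example.com" -> "abc@example.com"
    return configdata["mail"]["FROM_EMAIL"] + configdata["mail"]["ORG_EMAIL"]

if __name__ == "__main__":
    # Inline sample config mirroring mailconfig.json
    config = {"mail": {"FROM_EMAIL": "abc", "ORG_EMAIL": "@example.com"}}
    print(full_address(config))  # abc@example.com
```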
<p>Now let's create two functions: <strong>read_email_from_email(username, configdata)</strong>, which takes the two parameters <strong>username</strong> (the logged-in user) and <strong>configdata</strong> (data from mailconfig.json) and communicates with the email provider, and <strong>speak(text)</strong>, which takes a single string parameter and converts it to speech.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">speak</span>(<span class="hljs-params">text</span>):</span>
    tts = gTTS(text, lang=<span class="hljs-string">'en'</span>) <span class="hljs-comment">#gtts API to convert text to speech</span>
    tts.save(<span class="hljs-string">"output.mp3"</span>) <span class="hljs-comment">#saving as the audio file</span>
    playsound.playsound(<span class="hljs-string">'output.mp3'</span>) <span class="hljs-comment">#playing from audio file</span>
</code></pre>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">read_email_from_email</span>(<span class="hljs-params">username,configdata</span>):</span>
    ORG_EMAIL   = configdata[<span class="hljs-string">'mail'</span>][<span class="hljs-string">'ORG_EMAIL'</span>]
    FROM_EMAIL  = configdata[<span class="hljs-string">'mail'</span>][<span class="hljs-string">'FROM_EMAIL'</span>] + ORG_EMAIL
    FROM_PWD    = configdata[<span class="hljs-string">'mail'</span>][<span class="hljs-string">'FROM_PWD'</span>]
    SMTP_SERVER = configdata[<span class="hljs-string">'mail'</span>][<span class="hljs-string">'SMTP_SERVER'</span>]
    mail = MailBox(SMTP_SERVER).login(FROM_EMAIL, FROM_PWD)
    messages = mail.fetch(criteria=AND(seen=<span class="hljs-literal">False</span>),mark_seen=<span class="hljs-literal">True</span>,bulk=<span class="hljs-literal">True</span>)

    msg=list(messages)
    count=len(msg)
    <span class="hljs-keyword">if</span> count&gt;<span class="hljs-number">0</span>:
        text=username+<span class="hljs-string">", You have an Email From "</span>+msg[<span class="hljs-number">0</span>].from_+<span class="hljs-string">",with a Subject Saying "</span>+msg[<span class="hljs-number">0</span>].subject
        print(text)
        speak(text)

    <span class="hljs-keyword">else</span>:
        print(<span class="hljs-string">"You Don't have any new emails!!"</span>)
        speak(<span class="hljs-string">"You Don't have any new emails"</span>)
</code></pre>
<p>In <strong>read_email_from_email</strong>, we connect to the mail server using <strong>MailBox</strong> with the user's credentials. With <strong>fetch(criteria=AND(seen=False), mark_seen=True, bulk=True)</strong> we fetch all <strong>unseen</strong> emails in <strong>bulk</strong> and also <strong>mark them as seen</strong>. Then we build a string from the <strong>From</strong> address and <strong>Subject</strong>. When we run this script, it reads the latest email and marks it as <strong>Read</strong>. I have used only the first fetched message, but you can play around with it and process the data according to your requirements.</p>
<p>I hope this article helped you get familiar with how to read emails using Python, and I hope your reaction is something like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766145132680/72798610-f578-4b39-9dc7-0c9c6d65fc69.jpeg" alt class="image--center mx-auto" /></p>
<p>I enjoyed It!!</p>
<p>Stay tuned for more exciting how-to blogs. Connect with me on my social handles to share feedback, or comment your thoughts; I would love that.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766145179720/01a02324-2ec2-409f-8bed-12f8602ddc18.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[How to generate and use an SSL certificate in NodeJS]]></title><description><![CDATA[In this article, we will see how can we generate an SSL certificate for our development server. And later on, we will see how can we use that certificate inside our application.
Let's Create a Demo App in Express js
To create a new npm project, let's...]]></description><link>https://deviloper.in/ssl-certificate-in-nodejs</link><guid isPermaLink="true">https://deviloper.in/ssl-certificate-in-nodejs</guid><category><![CDATA[Node.js]]></category><category><![CDATA[Express.js]]></category><category><![CDATA[SSL]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Wed, 12 Jan 2022 11:26:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1641986641610/-V02jcCMl.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we will see how can we generate an SSL certificate for our development server. And later on, we will see how can we use that certificate inside our application.</p>
<h2 id="heading-lets-create-a-demo-app-in-express-js">Let's Create a Demo App in Express js</h2>
<p>To create a new <strong>npm</strong> project, let's create a directory named "<strong>node-ssl-server</strong>" and open it in the terminal:</p>
<pre><code class="lang-cmd">mkdir node-ssl-server
cd node-ssl-server
</code></pre>
<p> Then run this command to create a new <strong>npm</strong> project.</p>
<pre><code class="lang-cmd"> npm init -y
</code></pre>
<p>Now let's install the dependency i.e <strong>express</strong>, to do so run this command:</p>
<pre><code class="lang-cmd">npm install --save express
</code></pre>
<p>Now let's create a start script in <strong>package.json</strong>, just add this line inside the "<strong>script{}</strong>"  as shown below:</p>
<pre><code class="lang-js"><span class="hljs-string">"scripts"</span>: {
    <span class="hljs-string">"start"</span>:<span class="hljs-string">"node index.js"</span>
  },
</code></pre>
<p>you can also use nodemon if you have nodemon installed in your system like this:</p>
<pre><code class="lang-js"><span class="hljs-string">"scripts"</span>: {
    <span class="hljs-string">"start"</span>:<span class="hljs-string">"nodemon index.js"</span>
  },
</code></pre>
<p>Now let's add a <strong>index.js</strong> file in our app and add few lines in it as shown below:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>) 
<span class="hljs-keyword">const</span> https = <span class="hljs-built_in">require</span>(<span class="hljs-string">"https"</span>) <span class="hljs-comment">// https module to create a ssl enabled server</span>
<span class="hljs-keyword">const</span> path = <span class="hljs-built_in">require</span>(<span class="hljs-string">"path"</span>) <span class="hljs-comment">// path module </span>
<span class="hljs-keyword">const</span> fs = <span class="hljs-built_in">require</span>(<span class="hljs-string">"fs"</span>) <span class="hljs-comment">//file system module</span>

<span class="hljs-keyword">const</span> app = express()
<span class="hljs-keyword">const</span> port = <span class="hljs-number">3002</span> <span class="hljs-comment">// the port our HTTPS server will listen on</span>

app.use(<span class="hljs-string">"/"</span>,<span class="hljs-function">(<span class="hljs-params">req,res,next</span>)=&gt;</span>{
    res.send(<span class="hljs-string">"hello from ssl secured server!!"</span>)
})

<span class="hljs-keyword">const</span> options ={
  <span class="hljs-attr">key</span>:<span class="hljs-string">''</span>,
  <span class="hljs-attr">cert</span>:<span class="hljs-string">''</span> 
}
<span class="hljs-keyword">const</span> sslserver =https.createServer(options,app)

sslserver.listen(port,<span class="hljs-function">()=&gt;</span>{<span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Secure Server is listening on port <span class="hljs-subst">${port}</span>`</span>)});
</code></pre>
<h2 id="heading-lets-generate-ssl-certificates">Let's Generate SSL Certificates</h2>
<p>Before we proceed further, let's create a directory inside our app folder to store the certificates. </p>
<pre><code class="lang-cmd">mkdir cert
</code></pre>
<p>now move to the <strong>cert</strong> directory using <strong>cd</strong> command</p>
<pre><code class="lang-cmd">cd cert
</code></pre>
<p>To generate the SSL Certificate we need to follow these steps as shown below:</p>
<ul>
<li>Generate a Private Key</li>
<li>Create a CSR ( certificate signing request) using the private key.</li>
<li>Generate the SSL certificate from the CSR</li>
</ul>
<h4 id="heading-generate-a-private-key">Generate a Private Key</h4>
<p>To generate a private key we will run this command as shown below:</p>
<pre><code class="lang-cmd"> openssl genrsa -out key.pem
</code></pre>
<p>Once we run the above command, it generates the private key, saves it in the <strong>key.pem</strong> file inside the <strong>cert</strong> directory, and prints a message like this in the terminal.</p>
<pre><code class="lang-cmd">Generating RSA private key, 2048 bit long modulus
...+++
.................+++
e is 65537 (0x10001)
</code></pre>
<h4 id="heading-create-a-csr-certificate-signing-request">Create a CSR ( Certificate Signing Request)</h4>
<p>Since we are acting as our own certificate authority, we need a CSR to generate our certificate. To do so, run the command below.</p>
<pre><code class="lang-cmd">openssl req -new -key key.pem -out csr.pem
</code></pre>
<p>Once we run this command, it asks a few questions, as shown below:</p>
<pre><code class="lang-cmd">You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields, there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
</code></pre>
<p>You can skip any question by simply pressing Enter, or provide the details if you want; it's totally up to you.</p>
<p>Once you're done with these questions, it generates the CSR in the <strong>csr.pem</strong> file inside the <strong>cert</strong> folder.</p>
<h4 id="heading-generate-the-ssl-certificate">Generate the SSL Certificate</h4>
<p>Now for the final step, we need to use the <strong>key.pem</strong> and <strong>csr.pem</strong> files to generate our SSL certificate. </p>
<p>let's run the below command to generate it.</p>
<pre><code class="lang-cmd">openssl x509 -req -days 365 -in csr.pem -signkey key.pem -out cert.pem
</code></pre>
<h3 id="heading-note">Note:</h3>
<ul>
<li>we are using <a target="_blank" href="https://en.wikipedia.org/wiki/X.509">x509</a> because it is the standard defining the format of the public-key certificate.</li>
<li>we set the validity of the certificate as 365 days.</li>
</ul>
<p>After running the above command, it saves the certificate in the <strong>cert.pem</strong> file inside the <strong>cert</strong> folder. You can now remove the <strong>csr.pem</strong> file or keep it.</p>
<h2 id="heading-integration-of-the-ssl-certificate-in-express">Integration of the SSL Certificate in Express</h2>
<p>Now let's use these certificates inside our app using <strong>file system (fs) and path module</strong>. To do so, we need to edit a few lines in our app as mentioned below:</p>
<p>Earlier we created a constant variable <strong>options</strong>. Now we will update that part of the code by adding the paths of the generated certificates, as shown below.</p>
<p>Before:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> options ={
  <span class="hljs-attr">key</span>:<span class="hljs-string">''</span>,
  <span class="hljs-attr">cert</span>:<span class="hljs-string">''</span> 
}
</code></pre>
<p>After:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> options ={
  <span class="hljs-attr">key</span>:fs.readFileSync(path.join(__dirname,<span class="hljs-string">'./cert/key.pem'</span>)),
  <span class="hljs-attr">cert</span>:fs.readFileSync(path.join(__dirname,<span class="hljs-string">'./cert/cert.pem'</span>)) 
}
</code></pre>
<p>Once that's done, save the file and run the server with: </p>
<pre><code class="lang-cmd">npm start
</code></pre>
<p>You can check if HTTPS is working or not by just accessing it from this URL:</p>
<pre><code class="lang-chrome">https://localhost:3002
</code></pre>
<h2 id="heading-conclusion">Conclusion:</h2>
<ul>
<li>You might see <strong>Not Secure</strong> in your browser even though we have a valid certificate. That's because we generated the certificate ourselves rather than obtaining it from a known certificate authority, so the browser doesn't trust us as a valid certificate authority. This process is fine for development, but in production you should use a certificate issued by a certificate authority like <strong>Let's Encrypt</strong>.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Next-Generation Javascript | Modern Javascript  Refresher]]></title><description><![CDATA[In this article, I provided a brief introduction to some 
core next-gen JavaScript features, of course focusing on 
the ones you'll see the most in this courseHere's a quick 
summary!
let & const
Read more about let : https://developer.mozilla.org/en...]]></description><link>https://deviloper.in/next-generation-javascript-refresher</link><guid isPermaLink="true">https://deviloper.in/next-generation-javascript-refresher</guid><category><![CDATA[js]]></category><category><![CDATA[React]]></category><category><![CDATA[ES6]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Mon, 10 Jan 2022 13:54:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1639584478296/ItulUgXk8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I provided a brief introduction to some 
core next-gen JavaScript features, of course focusing on 
the ones you'll see the most in this course. Here's a quick 
summary!</p>
<h2 id="heading-let-andamp-const">let &amp; const</h2>
<p>Read more about <strong>let</strong> : https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/let
Read more about <strong>const</strong> : https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/const</p>
<p><strong>let</strong> and <strong>const</strong> basically replace <strong>var</strong>. You use <strong>let</strong> instead of <strong>var</strong> when the variable will be re-assigned, and <strong>const</strong> instead of <strong>var</strong> if you plan on never re-assigning this "variable" (effectively turning it into a constant).</p>
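<p>A minimal sketch of the difference (the variable names and values here are arbitrary):</p>

```js
// `let` declares a block-scoped variable that may be re-assigned later
let counter = 0;
counter = counter + 1; // fine

// `const` declares a block-scoped binding that cannot be re-assigned
const appName = 'Deviloper';
// appName = 'Other'; // would throw: Assignment to constant variable

console.log(counter); // 1
console.log(appName); // Deviloper
```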
<h2 id="heading-es6-arrow-functions">ES6 Arrow Functions</h2>
<p>Read more: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions</p>
<p>Arrow functions are a different way of creating functions in 
JavaScript. Besides a shorter syntax, they offer advantages 
when it comes to keeping the scope of the <strong>this</strong> keyword 
(see <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions#No_binding_of_this">here</a>).</p>
<p>Arrow function syntax may look strange but it's actually 
simple.</p>
<pre><code class="lang-js"> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">callMe</span>(<span class="hljs-params">name</span>) </span>{ 
    <span class="hljs-built_in">console</span>.log(name);
 }
</code></pre>
<p>which you could also write as:</p>
<pre><code class="lang-js"> <span class="hljs-keyword">const</span> callMe = <span class="hljs-function"><span class="hljs-keyword">function</span>(<span class="hljs-params">name</span>) </span>{ 
    <span class="hljs-built_in">console</span>.log(name);
 }
</code></pre>
<p>becomes: </p>
<pre><code class="lang-js"> <span class="hljs-keyword">const</span> callMe = <span class="hljs-function">(<span class="hljs-params">name</span>) =&gt;</span> { 
    <span class="hljs-built_in">console</span>.log(name);
 }
</code></pre>
<h3 id="heading-important">Important:</h3>
<p>When having <strong>no arguments</strong>, you have to use empty 
parentheses in the function declaration:</p>
<pre><code class="lang-js"> <span class="hljs-keyword">const</span> callMe = <span class="hljs-function">() =&gt;</span> { 
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Max!'</span>);
 }
</code></pre>
<p>When having <strong>exactly one argument</strong>, you may omit the 
parentheses:</p>
<pre><code class="lang-js"> <span class="hljs-keyword">const</span> callMe = <span class="hljs-function"><span class="hljs-params">name</span> =&gt;</span> { 
    <span class="hljs-built_in">console</span>.log(name);
 }
</code></pre>
<p>When <strong>just returning a value</strong>, you can use the following 
shortcut:</p>
<pre><code class="lang-js"> <span class="hljs-keyword">const</span> returnMe = <span class="hljs-function"><span class="hljs-params">name</span> =&gt;</span> name
</code></pre>
<p>That's equal to:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> returnMe = <span class="hljs-function"><span class="hljs-params">name</span> =&gt;</span> { 
    <span class="hljs-keyword">return</span> name;
 }
</code></pre>
<h2 id="heading-exports-andamp-imports">Exports &amp; Imports</h2>
<p>In React projects (and actually in all modern JavaScript 
projects), you split your code across multiple JavaScript 
files - so-called modules. You do this to keep each file/ 
module focused and manageable.</p>
<p>To still access functionality in another file, you need <strong>export</strong>
(to make it available) and <strong>import</strong> (to get 
access) statements.</p>
<p>There are two different types of 
exports: <strong>default</strong> (unnamed) and <strong>named</strong> exports:</p>
<p>default =&gt; <strong>export default ...;</strong> </p>
<p>named =&gt; <strong>export const someData = ...;</strong></p>
<p>You can import <strong>default exports</strong> like this:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> someNameOfYourChoice <span class="hljs-keyword">from</span> <span class="hljs-string">'./path/to/file.js'</span>;
</code></pre>
<p>As the name suggests, someNameOfYourChoice is totally up to you.
<strong>Named</strong> exports have to be imported by their name:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> { someData } <span class="hljs-keyword">from</span> <span class="hljs-string">'./path/to/file.js'</span>;
</code></pre>
<p>A file can only contain one default export and an unlimited amount 
of named exports. You can also mix the one default export with 
any amount of named exports in one and the same file.
When importing named exports, you can also import all 
<strong>named</strong> exports at once with the following syntax:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> * <span class="hljs-keyword">as</span> upToYou <span class="hljs-keyword">from</span> <span class="hljs-string">'./path/to/file.js'</span>;
</code></pre>
<p><strong>upToYou</strong> is - well - up to you and simply bundles all 
exported variables/functions in one JavaScript object. For 
example, if you have</p>
<pre><code class="lang-js"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> someData = ..(<span class="hljs-regexp">/path/</span>to/file.js )
</code></pre>
<p>you can then access it on <strong>upToYou</strong> like this:</p>
<pre><code class="lang-js">upToYou.someData
</code></pre>
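<p>To tie the forms together, here is a rough simulation of what each import statement hands you (the file name <code>./path/to/file.js</code> and the exported values are hypothetical; the simulation uses a plain object in place of the real module record):</p>

```js
// Suppose ./path/to/file.js (hypothetical) contains:
//   export default 'the default export';
//   export const someData = [1, 2, 3];
// The three import forms then behave roughly like this:
const moduleExports = { default: 'the default export', someData: [1, 2, 3] };

// import someNameOfYourChoice from './path/to/file.js';
const someNameOfYourChoice = moduleExports.default;

// import { someData } from './path/to/file.js';
const { someData } = moduleExports;

// import * as upToYou from './path/to/file.js';
const upToYou = moduleExports;

console.log(someNameOfYourChoice); // the default export
console.log(upToYou.someData);     // [ 1, 2, 3 ]
```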
<h2 id="heading-classes">Classes</h2>
<p>Classes are a feature which basically replaces constructor 
functions and prototypes. You can define blueprints for 
JavaScript objects with them.</p>
<p>Like this:</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Person</span> </span>{
    <span class="hljs-keyword">constructor</span> () {
        <span class="hljs-built_in">this</span>.name = <span class="hljs-string">'Max'</span>;
    }
}

<span class="hljs-keyword">const</span> person = <span class="hljs-keyword">new</span> Person();
<span class="hljs-built_in">console</span>.log(person.name); <span class="hljs-comment">// prints 'Max'</span>
</code></pre>
<p>In the above example, not only the class but also a property 
of that class (=&gt; <strong>name</strong> ) is defined. The syntax you see 
there is the "old" syntax for defining properties. In modern 
JavaScript projects (as the one used in this course), you 
can use the following, more convenient way of defining 
class properties:</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Person</span> </span>{
    name = <span class="hljs-string">'Max'</span>;
}

<span class="hljs-keyword">const</span> person = <span class="hljs-keyword">new</span> Person();
<span class="hljs-built_in">console</span>.log(person.name); <span class="hljs-comment">// prints 'Max'</span>
</code></pre>
<p>You can also define methods. Either like this:</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Person</span> </span>{
    name = <span class="hljs-string">'Max'</span>;
    printMyName () {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">this</span>.name); <span class="hljs-comment">// this is required to refer to the class!</span>
    }
}

<span class="hljs-keyword">const</span> person = <span class="hljs-keyword">new</span> Person();
person.printMyName();
</code></pre>
<p>Or like this:</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Person</span> </span>{
    name = <span class="hljs-string">'Max'</span>;
    printMyName = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">this</span>.name);
    }
}

<span class="hljs-keyword">const</span> person = <span class="hljs-keyword">new</span> Person();
person.printMyName();
</code></pre>
<p>The second approach has the same advantage as all arrow 
functions have: The <strong>this</strong> keyword doesn't change its 
reference.
You can also use <strong>inheritance</strong> when using classes:</p>
<pre><code class="lang-js"><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Human</span> </span>{
    species = <span class="hljs-string">'human'</span>;
}

class Person <span class="hljs-keyword">extends</span> Human {
    name = <span class="hljs-string">'Max'</span>;
    printMyName = <span class="hljs-function">() =&gt;</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-built_in">this</span>.name);
}
}

const person = <span class="hljs-keyword">new</span> Person();
person.printMyName();
<span class="hljs-built_in">console</span>.log(person.species); <span class="hljs-comment">// prints 'human'</span>
</code></pre>
<h2 id="heading-spread-andamp-rest-operator">Spread &amp; Rest Operator</h2>
<p>The spread and rest operators actually use the same 
syntax: </p>
<pre><code class="lang-js">..
</code></pre>
<p>Yes, that is the operator - just three dots. Its usage 
determines whether you're using it as the spread or rest 
operator.</p>
<h3 id="heading-using-the-spread-operator">Using the Spread Operator:</h3>
<p>The spread operator allows you to pull elements out of an 
array (=&gt; split the array into a list of its elements) or pull the 
properties out of an object. Here are two examples:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> oldArray = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>];
<span class="hljs-keyword">const</span> newArray = [...oldArray, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>]; <span class="hljs-comment">// This now is [1, 2, 3, 4, 5];</span>
</code></pre>
<p>Here's the spread operator used on an object:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> oldObject = {
    <span class="hljs-attr">name</span>: <span class="hljs-string">'Max'</span>
};
<span class="hljs-keyword">const</span> newObject = {
    ...oldObject,
    <span class="hljs-attr">age</span>: <span class="hljs-number">28</span>
};
</code></pre>
<p><strong>newObject</strong> would then be</p>
<pre><code class="lang-js">{
<span class="hljs-attr">name</span>: <span class="hljs-string">'Max'</span>,
<span class="hljs-attr">age</span>: <span class="hljs-number">28</span>
}
</code></pre>
<p>The spread operator is extremely useful for cloning arrays 
and objects. Since both are <a target="_blank" href="https://youtu.be/9ooYYRLdg_g">reference types (and not 
primitives)</a>, copying them safely (i.e. preventing future 
mutation of the copied original) can be tricky. With the 
spread operator you have an easy way of creating a 
(shallow!) clone of the object or array.</p>
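<p>The rest operator, by contrast, appears in a function's parameter list (or on the left side of a destructuring assignment) and collects the remaining values into a real array:</p>

```js
// Rest: merge an arbitrary number of arguments into an array
const toArray = (...args) => args;

console.log(toArray(1, 2, 3)); // [ 1, 2, 3 ]

// Rest in array destructuring: grab the first element, collect the rest
const [first, ...others] = [10, 20, 30];
console.log(first);  // 10
console.log(others); // [ 20, 30 ]
```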
<h2 id="heading-destructuring">Destructuring</h2>
<p>Destructuring allows you to easily access the values of 
arrays or objects and assign them to variables.
Here's an example for an array:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> array = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>];
<span class="hljs-keyword">const</span> [a, b] = array;
<span class="hljs-built_in">console</span>.log(a); <span class="hljs-comment">// prints 1</span>
<span class="hljs-built_in">console</span>.log(b); <span class="hljs-comment">// prints 2</span>
<span class="hljs-built_in">console</span>.log(array); <span class="hljs-comment">// prints [1, 2, 3]</span>
</code></pre>
<p>And here for an object:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> myObj = {
    <span class="hljs-attr">name</span>: <span class="hljs-string">'Max'</span>,
    <span class="hljs-attr">age</span>: <span class="hljs-number">28</span>
}
<span class="hljs-keyword">const</span> {name} = myObj;
<span class="hljs-built_in">console</span>.log(name); <span class="hljs-comment">// prints 'Max'</span>
<span class="hljs-built_in">console</span>.log(age); <span class="hljs-comment">// prints undefined</span>
<span class="hljs-built_in">console</span>.log(myObj); <span class="hljs-comment">// prints {name: 'Max', age: 28}</span>
</code></pre>
<p>Destructuring is very useful when working with function 
arguments. Consider this example:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> printName = <span class="hljs-function">(<span class="hljs-params">personObj</span>) =&gt;</span> {
<span class="hljs-built_in">console</span>.log(personObj.name);
}
printName({<span class="hljs-attr">name</span>: <span class="hljs-string">'Max'</span>, <span class="hljs-attr">age</span>: <span class="hljs-number">28</span>}); <span class="hljs-comment">// prints 'Max'</span>
</code></pre>
<p>Here, we only want to print the name in the function but we 
pass a complete person object to the function. Of course 
this is no issue but it forces us to call personObj.name 
inside of our function. We can condense this code with 
destructuring:</p>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> printName = <span class="hljs-function">(<span class="hljs-params">{name}</span>) =&gt;</span> {
<span class="hljs-built_in">console</span>.log(name);
}
printName({<span class="hljs-attr">name</span>: <span class="hljs-string">'Max'</span>, <span class="hljs-attr">age</span>: <span class="hljs-number">28</span>}); <span class="hljs-comment">// prints 'Max')</span>
</code></pre>
<p>We get the same result as above but we save some code.
By destructuring, we simply pull out the <strong>name</strong> property and 
store it in a variable/argument named <strong>name</strong> which we then 
can use in the function body.</p>
<h2 id="heading-js-array-functions">JS Array Functions</h2>
<p>Not really next-gen JavaScript, but also important: JavaScript array functions like <strong>map</strong>() , <strong>filter</strong>() , <strong>reduce</strong>()  etc.</p>
<p>You'll see me use them quite a bit since a lot of React concepts rely on working with arrays (in immutable ways).</p>
<p>The following page gives a good overview over the various methods you can use on the array prototype - feel free to click through them and refresh your knowledge as required: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array</p>
<p>Particularly important in this course are:</p>
<p>map()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map</p>
<p>find()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find</p>
<p>findIndex()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/findIndex</p>
<p>filter()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter</p>
<p>reduce()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce</p>
<p>concat()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat</p>
<p>slice()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/slice</p>
<p>splice()  =&gt; https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice</p>
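<p>As a quick refresher, here is how map(), filter() and reduce() transform an array without mutating the original (the sample data is made up for illustration):</p>

```js
const numbers = [1, 2, 3, 4];

const doubled = numbers.map(n => n * 2);            // a new array: [2, 4, 6, 8]
const evens = numbers.filter(n => n % 2 === 0);     // a new array: [2, 4]
const sum = numbers.reduce((acc, n) => acc + n, 0); // a single value: 10

console.log(doubled); // [ 2, 4, 6, 8 ]
console.log(evens);   // [ 2, 4 ]
console.log(sum);     // 10
console.log(numbers); // [ 1, 2, 3, 4 ] - the original array is untouched
```

<p>This immutable style is exactly why these three methods come up so often in React code.</p>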
]]></content:encoded></item><item><title><![CDATA[Regular Expression in Python]]></title><description><![CDATA[A Regular Expression (RegEx) is a special sequence of characters that uses a search pattern to find a string or set of strings. It can detect the presence or absence of text by matching with a particular pattern, and also can split a pattern into ...]]></description><link>https://deviloper.in/regular-expression-in-python</link><guid isPermaLink="true">https://deviloper.in/regular-expression-in-python</guid><category><![CDATA[Python]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Tue, 28 Sep 2021 22:59:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869934264/ZQRp-SnOf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A Regular Expression (RegEx) is a special sequence of characters that uses a search pattern to find a string or set of strings. It can detect the presence or absence of text by matching with a particular pattern, and also can split a pattern into one or more sub-patterns. Python provides a re module that supports the use of regex in Python. </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> re

s = <span class="hljs-string">'GeeksforGeeks: A computer science portal for geeks'</span>

match = re.search(<span class="hljs-string">r'portal'</span>, s)

print(<span class="hljs-string">'Start Index:'</span>, match.start())
print(<span class="hljs-string">'End Index:'</span>, match.end())
</code></pre>
<pre><code><span class="hljs-keyword">Start</span> <span class="hljs-keyword">Index</span>: <span class="hljs-number">34</span>
<span class="hljs-keyword">End</span> <span class="hljs-keyword">Index</span>: <span class="hljs-number">40</span>
</code></pre><h2 id="metacharacters">MetaCharacters</h2>
<p>Metacharacters are characters with a special meaning; they are important and will be used throughout the functions of the re module. Below is the list of metacharacters.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869880562/eLi6aKpZg.png" alt="regex1.png" /></p>
<pre><code class="lang-python"><span class="hljs-comment">#examples</span>

<span class="hljs-keyword">import</span> re

s = <span class="hljs-string">'geeks.forgeeks'</span>

<span class="hljs-comment"># without using \</span>
match = re.search(<span class="hljs-string">r'.'</span>, s)
print(match)

<span class="hljs-comment"># using \</span>
match = re.search(<span class="hljs-string">r'\.'</span>, s)
print(match)
</code></pre>
<pre><code><span class="hljs-tag">&lt;<span class="hljs-name">re.Match</span> <span class="hljs-attr">object</span>; <span class="hljs-attr">span</span>=<span class="hljs-string">(0,</span> <span class="hljs-attr">1</span>), <span class="hljs-attr">match</span>=<span class="hljs-string">'g'</span>&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">re.Match</span> <span class="hljs-attr">object</span>; <span class="hljs-attr">span</span>=<span class="hljs-string">(5,</span> <span class="hljs-attr">6</span>), <span class="hljs-attr">match</span>=<span class="hljs-string">'.'</span>&gt;</span>
</code></pre><h3 id="special-sequences">Special Sequences</h3>
<p>Special sequences do not match an actual character in the string; instead, they specify a location in the search string where the match must occur. This makes it easier to write commonly used patterns. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869894001/kzyt7I8nu.png" alt="regex2.png" /></p>
<h3 id="refindall">re.findall()</h3>
<p>Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found.</p>
<pre><code class="lang-python"><span class="hljs-comment"># A Python program to demonstrate working of</span>
<span class="hljs-comment"># findall()</span>
<span class="hljs-keyword">import</span> re

<span class="hljs-comment"># A sample text string where regular expression</span>
<span class="hljs-comment"># is searched.</span>
string = <span class="hljs-string">"""Hello my Number is 123456789 and
            my friend's number is 987654321"""</span>

<span class="hljs-comment"># A sample regular expression to find digits.</span>
regex = <span class="hljs-string">r'\d+'</span>

match = re.findall(regex, string)
print(match)

<span class="hljs-comment"># This example is contributed by Ayush Saluja.</span>
</code></pre>
<pre><code>['<span class="hljs-number">123456789</span>', '<span class="hljs-number">987654321</span>']
</code></pre><h2 id="recompile">re.compile()</h2>
<p>Regular expressions are compiled into pattern objects, which have methods for various operations such as searching for pattern matches or performing string substitutions. </p>
<pre><code class="lang-python"><span class="hljs-comment"># Module Regular Expression is imported</span>
<span class="hljs-comment"># using __import__().</span>
<span class="hljs-keyword">import</span> re

<span class="hljs-comment"># compile() creates regular expression</span>
<span class="hljs-comment"># character class [a-e],</span>
<span class="hljs-comment"># which is equivalent to [abcde].</span>
<span class="hljs-comment"># class [abcde] will match with string with</span>
<span class="hljs-comment"># 'a', 'b', 'c', 'd', 'e'.</span>
p = re.compile(<span class="hljs-string">'[a-e]'</span>)

<span class="hljs-comment"># findall() searches for the Regular Expression</span>
<span class="hljs-comment"># and return a list upon finding</span>
print(p.findall(<span class="hljs-string">"Aye, said Mr. Gibenson Stark"</span>))
</code></pre>
<pre><code>['e', 'a', 'd', 'b', 'e', 'a']
</code></pre><pre><code class="lang-python">
<span class="hljs-keyword">import</span> re

<span class="hljs-comment"># \w is equivalent to [a-zA-Z0-9_].</span>
p = re.compile(<span class="hljs-string">r'\w'</span>)
print(p.findall(<span class="hljs-string">"He said * in some_lang."</span>))

<span class="hljs-comment"># \w+ matches to group of alphanumeric character.</span>
p = re.compile(<span class="hljs-string">r'\w+'</span>)
print(p.findall(<span class="hljs-string">"I went to him at 11 A.M., he \
said *** in some_language."</span>))

<span class="hljs-comment"># \W matches to non alphanumeric characters.</span>
p = re.compile(<span class="hljs-string">r'\W'</span>)
print(p.findall(<span class="hljs-string">"he said *** in some_language."</span>))
</code></pre>
<pre><code>['H', 'e', 's', 'a', 'i', 'd', 'i', 'n', 's', 'o', 'm', 'e', '_', 'l', 'a', 'n', 'g']
['I', 'went', 'to', 'him', 'at', '<span class="hljs-number">11</span>', 'A', 'M', 'he', 'said', 'in', 'some_language']
[' ', ' ', '*', '*', '*', ' ', ' ', '.']
</code></pre><h2 id="resplit">re.split()</h2>
<p>Split a string by the occurrences of a character or a pattern; upon finding the pattern, the remaining characters of the string are returned as part of the resulting list. </p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> re <span class="hljs-keyword">import</span> split

<span class="hljs-comment"># '\W+' denotes Non-Alphanumeric Characters</span>
<span class="hljs-comment"># or group of characters Upon finding ','</span>
<span class="hljs-comment"># or whitespace ' ', the split(), splits the</span>
<span class="hljs-comment"># string from that point</span>
print(split(<span class="hljs-string">r'\W+'</span>, <span class="hljs-string">'Words, words , Words'</span>))
print(split(<span class="hljs-string">r'\W+'</span>, <span class="hljs-string">"Word's words Words"</span>))

<span class="hljs-comment"># Here ':', ' ' ,',' are not AlphaNumeric thus,</span>
<span class="hljs-comment"># the point where splitting occurs</span>
print(split(<span class="hljs-string">r'\W+'</span>, <span class="hljs-string">'On 12th Jan 2016, at 11:02 AM'</span>))

<span class="hljs-comment"># '\d+' denotes Numeric Characters or group of</span>
<span class="hljs-comment"># characters Splitting occurs at '12', '2016',</span>
<span class="hljs-comment"># '11', '02' only</span>
print(split(<span class="hljs-string">r'\d+'</span>, <span class="hljs-string">'On 12th Jan 2016, at 11:02 AM'</span>))
</code></pre>
<pre><code>['Words', 'words', 'Words']
['Word', 's', 'words', 'Words']
['On', '<span class="hljs-number">12</span>th', 'Jan', '<span class="hljs-number">2016</span>', 'at', '<span class="hljs-number">11</span>', '<span class="hljs-number">02</span>', 'AM']
['On ', 'th Jan ', ', at ', ':', ' AM']
</code></pre><h2 id="resub">re.sub()</h2>
<p>The 'sub' in the function name stands for substitution. A regular expression pattern is searched for in the given string (3rd parameter), and upon finding it, the match is replaced by repl (2nd parameter); count limits the number of times the replacement occurs. </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> re

<span class="hljs-comment"># Regular Expression pattern 'ub' matches the</span>
<span class="hljs-comment"># string at "Subject" and "Uber". As the CASE</span>
<span class="hljs-comment"># has been ignored, using Flag, 'ub' should</span>
<span class="hljs-comment"># match twice with the string Upon matching,</span>
<span class="hljs-comment"># 'ub' is replaced by '~*' in "Subject", and</span>
<span class="hljs-comment"># in "Uber", 'Ub' is replaced.</span>
print(re.sub(<span class="hljs-string">'ub'</span>, <span class="hljs-string">'~*'</span>, <span class="hljs-string">'Subject has Uber booked already'</span>,
            flags=re.IGNORECASE))

<span class="hljs-comment"># Consider the Case Sensitivity, 'Ub' in</span>
<span class="hljs-comment"># "Uber", will not be reaplced.</span>
print(re.sub(<span class="hljs-string">'ub'</span>, <span class="hljs-string">'~*'</span>, <span class="hljs-string">'Subject has Uber booked already'</span>))

<span class="hljs-comment"># As count has been given value 1, the maximum</span>
<span class="hljs-comment"># times replacement occurs is 1</span>
print(re.sub(<span class="hljs-string">'ub'</span>, <span class="hljs-string">'~*'</span>, <span class="hljs-string">'Subject has Uber booked already'</span>,
            count=<span class="hljs-number">1</span>, flags=re.IGNORECASE))

<span class="hljs-comment"># 'r' before the pattern denotes a raw string;</span>
<span class="hljs-comment"># \s matches any whitespace character.</span>
print(re.sub(<span class="hljs-string">r'\sAND\s'</span>, <span class="hljs-string">' &amp; '</span>, <span class="hljs-string">'Baked Beans And Spam'</span>,
            flags=re.IGNORECASE))
</code></pre>
<pre><code>S~<span class="hljs-emphasis">*ject has ~*</span>er booked already
S~<span class="hljs-emphasis">*ject has Uber booked already
S~*</span>ject has Uber booked already
Baked Beans &amp; Spam
</code></pre><h2 id="research">re.search()</h2>
<p>This method either returns None (if the pattern doesn't match), or a re.Match object that contains information about the matching part of the string. This method stops after the first match, so it is best suited for testing a regular expression rather than extracting data.</p>
<pre><code class="lang-python"><span class="hljs-comment"># A Python program to demonstrate working of re.match().</span>
<span class="hljs-keyword">import</span> re

<span class="hljs-comment"># Lets use a regular expression to match a date string</span>
<span class="hljs-comment"># in the form of Month name followed by day number</span>
regex = <span class="hljs-string">r"([a-zA-Z]+) (\d+)"</span>

match = re.search(regex, <span class="hljs-string">"I was born on June 24"</span>)

<span class="hljs-keyword">if</span> match <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:

    <span class="hljs-comment"># We reach here when the expression "([a-zA-Z]+) (\d+)"</span>
    <span class="hljs-comment"># matches the date string.</span>

    <span class="hljs-comment"># This will print [14, 21), since it matches at index 14</span>
    <span class="hljs-comment"># and ends at 21.</span>
    <span class="hljs-keyword">print</span> (<span class="hljs-string">"Match at index %s, %s"</span> % (match.start(), match.end()))

    <span class="hljs-comment"># We use the group() method to get all the matches and</span>
    <span class="hljs-comment"># captured groups. The groups contain the matched values.</span>
    <span class="hljs-comment"># In particular:</span>
    <span class="hljs-comment"># match.group(0) always returns the fully matched string</span>
    <span class="hljs-comment"># match.group(1) match.group(2), ... return the capture</span>
    <span class="hljs-comment"># groups in order from left to right in the input string</span>
    <span class="hljs-comment"># match.group() is equivalent to match.group(0)</span>

    <span class="hljs-comment"># So this will print "June 24"</span>
    <span class="hljs-keyword">print</span> (<span class="hljs-string">"Full match: %s"</span> % (match.group(<span class="hljs-number">0</span>)))

    <span class="hljs-comment"># So this will print "June"</span>
    <span class="hljs-keyword">print</span> (<span class="hljs-string">"Month: %s"</span> % (match.group(<span class="hljs-number">1</span>)))

    <span class="hljs-comment"># So this will print "24"</span>
    <span class="hljs-keyword">print</span> (<span class="hljs-string">"Day: %s"</span> % (match.group(<span class="hljs-number">2</span>)))

<span class="hljs-keyword">else</span>:
    <span class="hljs-keyword">print</span> (<span class="hljs-string">"The regex pattern does not match."</span>)
</code></pre>
<pre><code><span class="hljs-string">Match</span> <span class="hljs-string">at</span> <span class="hljs-string">index</span> <span class="hljs-number">14</span><span class="hljs-string">,</span> <span class="hljs-number">21</span>
<span class="hljs-attr">Full match:</span> <span class="hljs-string">June</span> <span class="hljs-number">24</span>
<span class="hljs-attr">Month:</span> <span class="hljs-string">June</span>
<span class="hljs-attr">Day:</span> <span class="hljs-number">24</span>
</code></pre><p>For more on <a target="_blank" href="https://docs.python.org/3/howto/regex.html#:~:text=%20Using%20Regular%20Expressions%20%C2%B6%20%201%20Compiling,methods%3B%20the%20re%20module%20also%20provides...%20More%20">Regular expressions</a></p>
]]></content:encoded></item><item><title><![CDATA[Python Modules & Packages]]></title><description><![CDATA[If you want your code to be well organized, it’s a good idea to start by grouping related code. A module is basically a bunch of related code saved in a file with the extension .py. You may choose to define functions, classes, or variables in a module...]]></description><link>https://deviloper.in/python-modules-and-packages</link><guid isPermaLink="true">https://deviloper.in/python-modules-and-packages</guid><category><![CDATA[Python]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Tue, 28 Sep 2021 22:56:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869659000/AiHtc0NKJ.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you want your code to be well organized, it’s a good idea to start by grouping related code. <strong>A module is basically a bunch of related code saved in a file with the extension .py.</strong> You may choose to define functions, classes, or variables in a module. It’s also fine to include runnable code in modules.</p>
<pre><code class="lang-python"><span class="hljs-comment"># welcome.py</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">welcome_message</span>(<span class="hljs-params">course</span>):</span>
    print(<span class="hljs-string">"Thank you for subscribing to our "</span> + course + <span class="hljs-string">" course. You will get all the details in an email shortly."</span>)
</code></pre>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> welcome
welcome.welcome_message (<span class="hljs-string">"Python Essentials"</span>)
</code></pre>
<pre><code>Thank you <span class="hljs-keyword">for</span> subscribing <span class="hljs-keyword">to</span> our Python Essentials course. You will <span class="hljs-keyword">get</span> <span class="hljs-keyword">all</span> the details <span class="hljs-keyword">in</span> an email shortly.
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">from</span> welcome <span class="hljs-keyword">import</span> welcome_message
welcome_message(<span class="hljs-string">"Python Essentials"</span>)
</code></pre>
<pre><code>Thank you <span class="hljs-keyword">for</span> subscribing <span class="hljs-keyword">to</span> our Python Essentials course. You will <span class="hljs-keyword">get</span> <span class="hljs-keyword">all</span> the details <span class="hljs-keyword">in</span> an email shortly.
</code></pre><p>If you have some experience with Python, you’ve likely used modules. For example, you may have used the:</p>
<ul>
<li><strong>random</strong> module, which implements pseudo-random number generators for various distributions.</li>
<li><strong>html</strong> module to parse HTML pages.</li>
<li><strong>datetime</strong> module to manipulate date and time data.</li>
<li><strong>re</strong> module to search and match text using regular expressions.</li>
</ul>
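<p>As a quick illustration, here is a minimal sketch that exercises three of these standard-library modules (the inputs are invented for the example):</p>

```python
import random
import datetime
import re

# random: draw a pseudo-random integer from an inclusive range
dice = random.randint(1, 6)

# datetime: construct and inspect a specific date
d = datetime.date(2021, 9, 28)

# re: find all capitalized words with a regular expression
matches = re.findall(r"\b[A-Z]\w*", "Python and JavaScript are popular")

print(dice, d.year, matches)
```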
<p>Modules introduce numerous benefits into your Python code:</p>
<ul>
<li>Improved development process. Python modules help you focus on one small portion of a task rather than an entire problem. This simplifies the development process and makes it less prone to errors. Furthermore, modules are usually written in a way that minimizes interdependency. Thus, it’s more viable for a team of several programmers to work on the same application.</li>
<li>Reusability. The functionality you define in one module can be used in different parts of an application, minimizing duplicate code.</li>
<li>Separate namespaces. With Python modules, you can define separate namespaces to avoid collisions between identifiers in different parts of your application.</li>
</ul>
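<p>To see separate namespaces in practice, compare the standard library's <code>math</code> and <code>cmath</code> modules: both define a function named <code>sqrt</code>, but qualifying each call with its module name keeps the two from colliding.</p>

```python
import math
import cmath

# Two functions share the name "sqrt", but each lives in its own
# module namespace, so there is no collision.
real_root = math.sqrt(16)       # real square root
complex_root = cmath.sqrt(-16)  # complex square root

print(real_root)     # 4.0
print(complex_root)  # 4j
```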
<h3 id="examples">Examples</h3>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> sys
sys.path
</code></pre>
<pre><code>['C:\\Users\\rahuldubey\\Documents\\My Learnings\\Training\\Python Advance\\<span class="hljs-number">6.</span> Modules &amp; Packages',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\python37.zip',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\DLLs',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3',
 '',
 'C:\\Users\\rahuldubey\\AppData\\Roaming\\Python\\Python37\\site-packages',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\win32',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\win32\\lib',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\Pythonwin',
 'C:\\Users\\rahuldubey\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\IPython\\extensions',
 'C:\\Users\\rahuldubey\\.ipython']
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> platform

x = platform.system()
print(x)
</code></pre>
<pre><code>Windows
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> datetime

x = datetime.datetime.now()
print(x)
</code></pre>
<pre><code><span class="hljs-attribute">2021</span>-<span class="hljs-number">09</span>-<span class="hljs-number">22</span> <span class="hljs-number">21</span>:<span class="hljs-number">54</span>:<span class="hljs-number">34</span>.<span class="hljs-number">483628</span>
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> math

x = math.sqrt(<span class="hljs-number">64</span>)

print(x)
</code></pre>
<pre><code>8<span class="hljs-selector-class">.0</span>
</code></pre><h2 id="python-packages">Python Packages</h2>
<p>When developing a large application, you may end up with many different modules that are difficult to manage. In such a case, you’ll benefit from grouping and organizing your modules. That’s when packages come into play.</p>
<p>A Python package is basically a directory containing a collection of modules. Packages give the module namespace a hierarchical structure: just as we organize files on a hard drive into folders and sub-folders, we can organize our modules into packages and subpackages.</p>
<p>To be considered a package (or subpackage), a directory must contain a file named <code>__init__.py</code>. This file usually includes the initialization code for the corresponding package.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869431132/taAKCZmZC.png" alt="package.png" /></p>
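<p>The sketch below builds a tiny throwaway package on disk and then imports from it, so you can see the <code>__init__.py</code> convention end to end. The package name <code>greetings</code> and module name <code>english</code> are invented purely for this illustration:</p>

```python
import os
import sys
import tempfile

# Create a temporary directory to hold the example package
base = tempfile.mkdtemp()
pkg = os.path.join(base, "greetings")
os.makedirs(pkg)

# An (empty) __init__.py marks the directory as a package
open(os.path.join(pkg, "__init__.py"), "w").close()

# A module inside the package
with open(os.path.join(pkg, "english.py"), "w") as f:
    f.write("def hello(name):\n    return 'Hello, ' + name\n")

sys.path.insert(0, base)       # make the package importable
from greetings import english  # package.module import

print(english.hello("Python"))  # Hello, Python
```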
<p> <strong>Requests</strong> : The requests package is an HTTP library for Python. It is built on top of urllib3 (another HTTP client for Python), but has a much simpler and more elegant syntax.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> requests
r = requests.get(<span class="hljs-string">'https://api.spotify.com/'</span>)
r.status_code
</code></pre>
<pre><code><span class="hljs-number">200</span>
</code></pre><p><strong>NumPy</strong> : NumPy is the essential package for scientific and mathematical computing in Python. It introduces n-dimensional arrays and matrices, which are necessary when performing sophisticated mathematical operations. It contains functions that perform basic operations on arrays, such as sorting, shaping, and other mathematical matrix operations. </p>
<p>For example, to create two 2×2 complex matrices and print the sum: </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np

a = np.array([[<span class="hljs-number">1</span>+<span class="hljs-number">2j</span>, <span class="hljs-number">2</span>+<span class="hljs-number">1j</span>], [<span class="hljs-number">3</span>, <span class="hljs-number">4</span>]])
b = np.array([[<span class="hljs-number">5</span>, <span class="hljs-number">6</span>+<span class="hljs-number">6j</span>], [<span class="hljs-number">7</span>, <span class="hljs-number">8</span>+<span class="hljs-number">4j</span>]])
print(a+b)
</code></pre>
<pre><code>[[ <span class="hljs-number">6.</span>+<span class="hljs-number">2.</span>j  <span class="hljs-number">8.</span>+<span class="hljs-number">7.</span>j]
 [<span class="hljs-number">10.</span>+<span class="hljs-number">0.</span>j <span class="hljs-number">12.</span>+<span class="hljs-number">4.</span>j]]
</code></pre><p><strong>Pandas</strong> : The pandas package introduces a novel data structure, the DataFrame, optimized for tabular, multidimensional, and heterogeneous data. Once your data has been converted to this format, the package provides intuitive and practical means to clean and manipulate it. </p>
<p>Manipulations such as groupby, join, merge, and concatenation, as well as filling, replacing, and imputing null values, can be executed in a single line. The package's developers state their primary goal as producing the most powerful and flexible data analysis and manipulation tool available in any language, a daunting task that they may actually achieve. </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

df_1 = pd.DataFrame({<span class="hljs-string">'col1'</span>: [<span class="hljs-number">1</span>,<span class="hljs-number">2</span>], <span class="hljs-string">'col2'</span>: [<span class="hljs-number">3</span>,<span class="hljs-number">4</span>]})

df_2 = pd.DataFrame({<span class="hljs-string">'col3'</span>: [<span class="hljs-number">5</span>,<span class="hljs-number">6</span>], <span class="hljs-string">'col4'</span>: [<span class="hljs-number">7</span>,<span class="hljs-number">8</span>]})
df = pd.concat([df_1,df_2], axis = <span class="hljs-number">1</span>)
df
</code></pre>
<div>

<table>
  <thead>
    <tr>
      <th></th>
      <th>col1</th>
      <th>col2</th>
      <th>col3</th>
      <th>col4</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>0</td>
      <td>1</td>
      <td>3</td>
      <td>5</td>
      <td>7</td>
    </tr>
    <tr>
      <td>1</td>
      <td>2</td>
      <td>4</td>
      <td>6</td>
      <td>8</td>
    </tr>
  </tbody>
</table>
</div>
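<p>The one-line manipulations mentioned above, such as groupby, look like this (the data is invented for illustration):</p>

```python
import pandas as pd

# Toy data, invented for this example
df = pd.DataFrame({
    "team": ["A", "A", "B", "B"],
    "score": [10, 20, 30, 40],
})

# Aggregate in a single line: total score per team
totals = df.groupby("team")["score"].sum()
print(totals)
```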



<h2 id="modules-vs-packages">Modules Vs Packages</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869448863/8hsscah-3.png" alt="mvsp.png" /></p>
<h2 id="numpy-introduction">NumPy - Introduction</h2>
<p>NumPy is a Python package. It stands for 'Numerical Python'. It is a library consisting of multidimensional array objects and a collection of routines for processing those arrays.</p>
<p>Numeric, the ancestor of NumPy, was developed by Jim Hugunin. Another package, Numarray, was also developed with some additional functionality. In 2005, Travis Oliphant created the NumPy package by incorporating the features of Numarray into Numeric. There are many contributors to this open-source project.</p>
<p><strong>Operations using NumPy</strong></p>
<p>Using NumPy, a developer can perform the following operations −</p>
<ul>
<li><p>Mathematical and logical operations on arrays.</p>
</li>
<li><p>Fourier transforms and routines for shape manipulation.</p>
</li>
<li><p>Operations related to linear algebra. NumPy has in-built functions for linear algebra and random number generation.</p>
</li>
</ul>
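<p>A short sketch of the linear algebra and Fourier routines mentioned above (the system of equations is invented for illustration):</p>

```python
import numpy as np

# Linear algebra: solve the system  3x + y = 9,  x + 2y = 8
a = np.array([[3, 1], [1, 2]])
b = np.array([9, 8])
x = np.linalg.solve(a, b)
print(x)  # [2. 3.]

# Fourier transform of a constant signal: all energy lands in the
# first (zero-frequency) bin
spectrum = np.fft.fft(np.ones(4))
print(spectrum)
```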
<h3 id="numpy-environment">NumPy - Environment</h3>
<p>The standard Python distribution doesn't come bundled with the NumPy module. A lightweight way to get it is to install NumPy using the popular Python package installer, pip.</p>
<pre><code class="lang-cmd">pip install numpy
</code></pre>
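<p>Once the install finishes, you can confirm it worked from a Python prompt; for example:</p>

```python
import numpy as np

# If the import succeeds, NumPy is installed; print its version
print(np.__version__)
```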
<h3 id="numpy-ndarray-object">NumPy - Ndarray Object</h3>
<p>The most important object defined in NumPy is an N-dimensional array type called ndarray. It describes a collection of items of the same type. Items in the collection can be accessed using a zero-based index.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
a = np.array([<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>]) 
<span class="hljs-keyword">print</span> (a)
</code></pre>
<pre><code>[<span class="hljs-number">1</span> <span class="hljs-number">2</span> <span class="hljs-number">3</span>]
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
a = np.array([[<span class="hljs-number">1</span>, <span class="hljs-number">2</span>], [<span class="hljs-number">3</span>, <span class="hljs-number">4</span>]]) 
print( a)
</code></pre>
<pre><code>[[<span class="hljs-number">1</span> <span class="hljs-number">2</span>]
 [<span class="hljs-number">3</span> <span class="hljs-number">4</span>]]
</code></pre><pre><code class="lang-python"><span class="hljs-comment"># dtype parameter </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
a = np.array([<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>], dtype = complex) 
print( a)
</code></pre>
<pre><code>[<span class="hljs-number">1</span><span class="hljs-string">.+0.j</span> <span class="hljs-number">2</span><span class="hljs-string">.+0.j</span> <span class="hljs-number">3</span><span class="hljs-string">.+0.j</span>]
</code></pre><h3 id="numpy-array-attributes">NumPy - Array Attributes</h3>
<p><strong>ndarray.shape</strong>
This array attribute returns a tuple consisting of the array's dimensions. Assigning a new tuple to it reshapes the array in place.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
a = np.array([[<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>],[<span class="hljs-number">4</span>,<span class="hljs-number">5</span>,<span class="hljs-number">6</span>]]) 
<span class="hljs-keyword">print</span> (a.shape)
</code></pre>
<pre><code><span class="hljs-string">(2,</span> <span class="hljs-number">3</span><span class="hljs-string">)</span>
</code></pre><p><strong>ndarray.ndim</strong>
This array attribute returns the number of array dimensions.</p>
<pre><code class="lang-python"><span class="hljs-comment"># this is one dimensional array </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
a = np.arange(<span class="hljs-number">24</span>) 
print(a.ndim) 

<span class="hljs-comment"># now reshape it </span>
b = a.reshape(<span class="hljs-number">2</span>,<span class="hljs-number">4</span>,<span class="hljs-number">3</span>) 
<span class="hljs-keyword">print</span> (b )
<span class="hljs-comment"># b is having three dimensions</span>
</code></pre>
<pre><code><span class="hljs-number">1</span>
[[[ <span class="hljs-number">0</span>  <span class="hljs-number">1</span>  <span class="hljs-number">2</span>]
  [ <span class="hljs-number">3</span>  <span class="hljs-number">4</span>  <span class="hljs-number">5</span>]
  [ <span class="hljs-number">6</span>  <span class="hljs-number">7</span>  <span class="hljs-number">8</span>]
  [ <span class="hljs-number">9</span> <span class="hljs-number">10</span> <span class="hljs-number">11</span>]]

 [[<span class="hljs-number">12</span> <span class="hljs-number">13</span> <span class="hljs-number">14</span>]
  [<span class="hljs-number">15</span> <span class="hljs-number">16</span> <span class="hljs-number">17</span>]
  [<span class="hljs-number">18</span> <span class="hljs-number">19</span> <span class="hljs-number">20</span>]
  [<span class="hljs-number">21</span> <span class="hljs-number">22</span> <span class="hljs-number">23</span>]]]
</code></pre><p><strong>numpy.itemsize</strong>
This array attribute returns the length of each element of array in bytes.</p>
<pre><code class="lang-python"><span class="hljs-comment"># dtype of array is int8 (1 byte) </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
x = np.array([<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>,<span class="hljs-number">4</span>,<span class="hljs-number">5</span>], dtype = np.int8) 
<span class="hljs-keyword">print</span> (x.itemsize)
</code></pre>
<pre><code><span class="hljs-number">1</span>
</code></pre><pre><code class="lang-python"><span class="hljs-comment"># dtype of array is now float32 (4 bytes) </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
x = np.array([<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>,<span class="hljs-number">4</span>,<span class="hljs-number">5</span>], dtype = np.float32) 
<span class="hljs-keyword">print</span> (x.itemsize)
</code></pre>
<pre><code><span class="hljs-number">4</span>
</code></pre><p><strong>numpy.zeros</strong>
Returns a new array of specified size, filled with zeros.</p>
<pre><code class="lang-python"><span class="hljs-comment"># array of five zeros. Default dtype is float </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
x = np.zeros(<span class="hljs-number">5</span>) 
print( x)
</code></pre>
<pre><code>[<span class="hljs-number">0</span><span class="hljs-string">.</span> <span class="hljs-number">0</span><span class="hljs-string">.</span> <span class="hljs-number">0</span><span class="hljs-string">.</span> <span class="hljs-number">0</span><span class="hljs-string">.</span> <span class="hljs-number">0</span><span class="hljs-string">.</span>]
</code></pre><p><strong>numpy.ones</strong>
Returns a new array of specified size and type, filled with ones.</p>
<pre><code class="lang-python"><span class="hljs-comment"># array of five ones. Default dtype is float </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
x = np.ones(<span class="hljs-number">5</span>) 
<span class="hljs-keyword">print</span> (x)
</code></pre>
<pre><code>[<span class="hljs-number">1</span><span class="hljs-string">.</span> <span class="hljs-number">1</span><span class="hljs-string">.</span> <span class="hljs-number">1</span><span class="hljs-string">.</span> <span class="hljs-number">1</span><span class="hljs-string">.</span> <span class="hljs-number">1</span><span class="hljs-string">.</span>]
</code></pre><p><strong>numpy.asarray</strong>
This function is similar to numpy.array, but it has fewer parameters. It is useful for converting a Python sequence into an ndarray.</p>
<pre><code class="lang-python"><span class="hljs-comment"># convert list to ndarray </span>
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 

x = [<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>] 
a = np.asarray(x) 
print( a)
</code></pre>
<pre><code>[<span class="hljs-number">1</span> <span class="hljs-number">2</span> <span class="hljs-number">3</span>]
</code></pre><p><strong>numpy.linspace</strong>
This function is similar to the arange() function, but instead of a step size, you specify the number of evenly spaced values to generate within the interval. </p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np 
x = np.linspace(<span class="hljs-number">10</span>,<span class="hljs-number">20</span>,<span class="hljs-number">5</span>) 
<span class="hljs-keyword">print</span> (x)
</code></pre>
<pre><code>[<span class="hljs-number">10</span><span class="hljs-string">.</span>  <span class="hljs-number">12.5</span> <span class="hljs-number">15</span><span class="hljs-string">.</span>  <span class="hljs-number">17.5</span> <span class="hljs-number">20</span><span class="hljs-string">.</span> ]
</code></pre><p>For more on <a target="_blank" href="https://www.tutorialspoint.com/numpy/numpy_quick_guide.htm">numpy</a></p>
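<p>To make the contrast with arange() concrete, here is a small side-by-side sketch:</p>

```python
import numpy as np

# arange: you give the step size; the stop value (20) is excluded
stepped = np.arange(10, 20, 2.5)

# linspace: you give the number of points; the stop value is included
spaced = np.linspace(10, 20, 5)

print(stepped)  # [10.  12.5 15.  17.5]
print(spaced)   # [10.  12.5 15.  17.5 20. ]
```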
<h2 id="pandas-introduction">Pandas - Introduction</h2>
<p>Pandas is an open-source Python library providing high-performance data manipulation and analysis tools built on powerful data structures. The name Pandas is derived from the term "Panel Data", used in econometrics for multidimensional data sets.</p>
<p>The standard Python distribution doesn't come bundled with the Pandas module. A lightweight way to get it is to install pandas using the popular Python package installer, pip.</p>
<pre><code class="lang-cmd">pip install pandas
</code></pre>
<p>Pandas deals with the following three data structures −</p>
<ul>
<li>Series</li>
<li>DataFrame</li>
<li>Panel</li>
</ul>
<p>These data structures are built on top of NumPy arrays, which makes them fast.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869464120/e-R84HNHbC.png" alt="pandas.png" /></p>
<p><strong>Series</strong></p>
<p>Series is a one-dimensional array-like structure with homogeneous data. For example, the following series is a collection of integers 10, 23, 56, …</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869479685/QQZVVXUHU.png" alt="series.png" /></p>
<p>Key Points</p>
<ul>
<li>Homogeneous data</li>
<li>Size Immutable</li>
<li>Values of Data Mutable</li>
</ul>
<p><strong>DataFrame</strong></p>
<p>DataFrame is a two-dimensional array with heterogeneous data. For example,</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869492748/TZ8HBIfv_.png" alt="dataframe.png" /></p>
<p>Key Points</p>
<ul>
<li>Heterogeneous data</li>
<li>Size Mutable</li>
<li>Data Mutable</li>
</ul>
<p><strong>Panel</strong></p>
<p>Panel is a three-dimensional data structure with heterogeneous data. A panel is hard to represent graphically, but it can be pictured as a container of DataFrames. (Note that Panel has since been deprecated and removed in modern versions of pandas; a DataFrame with a MultiIndex is the recommended alternative.)</p>
<p>Key Points</p>
<ul>
<li>Heterogeneous data</li>
<li>Size Mutable</li>
<li>Data Mutable</li>
</ul>
<pre><code class="lang-python"><span class="hljs-comment">#series example</span>

<span class="hljs-comment">#import the pandas library and aliasing as pd</span>
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
data = np.array([<span class="hljs-string">'a'</span>,<span class="hljs-string">'b'</span>,<span class="hljs-string">'c'</span>,<span class="hljs-string">'d'</span>])
s = pd.Series(data)
<span class="hljs-keyword">print</span> (s)
</code></pre>
<pre><code><span class="hljs-number">0</span>    <span class="hljs-string">a</span>
<span class="hljs-number">1</span>    <span class="hljs-string">b</span>
<span class="hljs-number">2</span>    <span class="hljs-string">c</span>
<span class="hljs-number">3</span>    <span class="hljs-string">d</span>
<span class="hljs-attr">dtype:</span> <span class="hljs-string">object</span>
</code></pre><pre><code class="lang-python"><span class="hljs-comment">#import the pandas library and aliasing as pd</span>
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
data = np.array([<span class="hljs-string">'a'</span>,<span class="hljs-string">'b'</span>,<span class="hljs-string">'c'</span>,<span class="hljs-string">'d'</span>])
s = pd.Series(data,index=[<span class="hljs-number">100</span>,<span class="hljs-number">101</span>,<span class="hljs-number">102</span>,<span class="hljs-number">103</span>])
print( s)
</code></pre>
<pre><code><span class="hljs-number">100</span>    <span class="hljs-string">a</span>
<span class="hljs-number">101</span>    <span class="hljs-string">b</span>
<span class="hljs-number">102</span>    <span class="hljs-string">c</span>
<span class="hljs-number">103</span>    <span class="hljs-string">d</span>
<span class="hljs-attr">dtype:</span> <span class="hljs-string">object</span>
</code></pre><pre><code class="lang-python"><span class="hljs-comment">#import the pandas library and aliasing as pd</span>
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
<span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np
data = {<span class="hljs-string">'a'</span> : <span class="hljs-number">0.</span>, <span class="hljs-string">'b'</span> : <span class="hljs-number">1.</span>, <span class="hljs-string">'c'</span> : <span class="hljs-number">2.</span>}
s = pd.Series(data,index=[<span class="hljs-string">'b'</span>,<span class="hljs-string">'c'</span>,<span class="hljs-string">'d'</span>,<span class="hljs-string">'a'</span>])
<span class="hljs-keyword">print</span> (s)
</code></pre>
<pre><code><span class="hljs-attribute">b</span>    <span class="hljs-number">1</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">c</span>    <span class="hljs-number">2</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">d</span>    NaN
<span class="hljs-attribute">a</span>    <span class="hljs-number">0</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">dtype</span>: float<span class="hljs-number">64</span>
</code></pre><pre><code class="lang-python"><span class="hljs-comment">#dataframe examples</span>
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
data = [<span class="hljs-number">1</span>,<span class="hljs-number">2</span>,<span class="hljs-number">3</span>,<span class="hljs-number">4</span>,<span class="hljs-number">5</span>]
df = pd.DataFrame(data)
<span class="hljs-keyword">print</span> (df)
</code></pre>
<pre><code>   <span class="hljs-number">0</span>
<span class="hljs-number">0</span>  <span class="hljs-number">1</span>
<span class="hljs-number">1</span>  <span class="hljs-number">2</span>
<span class="hljs-number">2</span>  <span class="hljs-number">3</span>
<span class="hljs-number">3</span>  <span class="hljs-number">4</span>
<span class="hljs-number">4</span>  <span class="hljs-number">5</span>
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
data = [[<span class="hljs-string">'Alex'</span>,<span class="hljs-number">10</span>],[<span class="hljs-string">'Bob'</span>,<span class="hljs-number">12</span>],[<span class="hljs-string">'Clarke'</span>,<span class="hljs-number">13</span>]]
df = pd.DataFrame(data,columns=[<span class="hljs-string">'Name'</span>,<span class="hljs-string">'Age'</span>])
<span class="hljs-keyword">print</span> (df)
</code></pre>
<pre><code>     <span class="hljs-string">Name</span>  <span class="hljs-string">Age</span>
<span class="hljs-number">0</span>    <span class="hljs-string">Alex</span>   <span class="hljs-number">10</span>
<span class="hljs-number">1</span>     <span class="hljs-string">Bob</span>   <span class="hljs-number">12</span>
<span class="hljs-number">2</span>  <span class="hljs-string">Clarke</span>   <span class="hljs-number">13</span>
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
data = [[<span class="hljs-string">'Alex'</span>,<span class="hljs-number">10</span>],[<span class="hljs-string">'Bob'</span>,<span class="hljs-number">12</span>],[<span class="hljs-string">'Clarke'</span>,<span class="hljs-number">13</span>]]
df = pd.DataFrame(data,columns=[<span class="hljs-string">'Name'</span>,<span class="hljs-string">'Age'</span>],dtype=float)
<span class="hljs-keyword">print</span> (df)
</code></pre>
<pre><code>     <span class="hljs-attribute">Name</span>   Age
<span class="hljs-attribute">0</span>    Alex  <span class="hljs-number">10</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">1</span>     Bob  <span class="hljs-number">12</span>.<span class="hljs-number">0</span>
<span class="hljs-attribute">2</span>  Clarke  <span class="hljs-number">13</span>.<span class="hljs-number">0</span>
</code></pre><pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
data = [{<span class="hljs-string">'a'</span>: <span class="hljs-number">1</span>, <span class="hljs-string">'b'</span>: <span class="hljs-number">2</span>},{<span class="hljs-string">'a'</span>: <span class="hljs-number">5</span>, <span class="hljs-string">'b'</span>: <span class="hljs-number">10</span>, <span class="hljs-string">'c'</span>: <span class="hljs-number">20</span>}]
df = pd.DataFrame(data)
<span class="hljs-keyword">print</span> (df)
</code></pre>
<pre><code>   <span class="hljs-string">a</span>   <span class="hljs-string">b</span>     <span class="hljs-string">c</span>
<span class="hljs-number">0</span>  <span class="hljs-number">1</span>   <span class="hljs-number">2</span>   <span class="hljs-string">NaN</span>
<span class="hljs-number">1</span>  <span class="hljs-number">5</span>  <span class="hljs-number">10</span>  <span class="hljs-number">20.0</span>
</code></pre><p>For more on <a target="_blank" href="https://www.tutorialspoint.com/python_pandas/python_pandas_quick_guide.htm">pandas</a></p>
]]></content:encoded></item><item><title><![CDATA[Built-in Functions in Python]]></title><description><![CDATA[The Python interpreter has a number of functions and types built into it that are always available. They are listed here in alphabetical order.

Python abs()
The abs() function returns the absolute value of the given number. If the number is a comple...]]></description><link>https://deviloper.in/built-in-functions-in-python</link><guid isPermaLink="true">https://deviloper.in/built-in-functions-in-python</guid><category><![CDATA[Python]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Tue, 28 Sep 2021 22:48:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869265775/5rrzS73X4.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Python interpreter has a number of functions and types built into it that are always available. They are listed here in alphabetical order.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869220136/HoEi-mIdR.png" alt="built-in.png" /></p>
<h2 id="python-abs">Python abs()</h2>
<p>The abs() function returns the absolute value of the given number. If the number is a complex number, abs() returns its magnitude.</p>
<pre><code class="lang-python">number = <span class="hljs-number">-20</span>

absolute_number = abs(number)
print(absolute_number)

<span class="hljs-comment"># Output: 20</span>
</code></pre>
<pre><code><span class="hljs-number">20</span>
</code></pre><h2 id="python-all">Python all()</h2>
<p>The all() function returns True if all elements in the given iterable are true. If not, it returns False.</p>
<pre><code class="lang-python">boolean_list = [<span class="hljs-string">'True'</span>, <span class="hljs-string">'True'</span>, <span class="hljs-string">'True'</span>]

<span class="hljs-comment"># check if all elements are true</span>
result = all(boolean_list)
print(result)

<span class="hljs-comment"># Output: True</span>
</code></pre>
<pre><code><span class="hljs-literal">True</span>
</code></pre><h2 id="python-any">Python any()</h2>
<p>The any() function returns True if any element of an iterable is True. If not, it returns False.</p>
<pre><code class="lang-python">boolean_list = [<span class="hljs-string">'True'</span>, <span class="hljs-string">'False'</span>, <span class="hljs-string">'True'</span>]

<span class="hljs-comment"># check if any element is true</span>
result = any(boolean_list)
print(result)

<span class="hljs-comment"># Output: True</span>
</code></pre>
<pre><code><span class="hljs-literal">True</span>
</code></pre><h2 id="python-ascii">Python ascii()</h2>
<p>The ascii() method returns a string containing a printable representation of an object. It escapes the non-ASCII characters in the string using \x, \u or \U escapes.</p>
<pre><code class="lang-python">otherText = <span class="hljs-string">'Pythön is interesting'</span>
print(ascii(otherText))
</code></pre>
<pre><code><span class="hljs-string">'Pyth\xf6n is interesting'</span>
</code></pre><h2 id="python-bin">Python bin()</h2>
<p>The bin() method converts and returns the binary equivalent string of a given integer. If the parameter isn't an integer, it has to implement the <code>__index__()</code> method to return an integer.</p>
<pre><code class="lang-python">number = <span class="hljs-number">5</span>
print(<span class="hljs-string">'The binary equivalent of 5 is:'</span>, bin(number))
</code></pre>
<pre><code><span class="hljs-attribute">The</span> binary equivalent of <span class="hljs-number">5</span> is: <span class="hljs-number">0</span>b<span class="hljs-number">101</span>
</code></pre><h2 id="python-enumerate">Python enumerate()</h2>
<p>The enumerate() method adds a counter to an iterable and returns it (the enumerate object).</p>
<pre><code class="lang-python">languages = [<span class="hljs-string">'Python'</span>, <span class="hljs-string">'Java'</span>, <span class="hljs-string">'JavaScript'</span>]

enumerate_prime = enumerate(languages)

<span class="hljs-comment"># convert enumerate object to list</span>
print(list(enumerate_prime))

<span class="hljs-comment"># Output: [(0, 'Python'), (1, 'Java'), (2, 'JavaScript')]</span>
</code></pre>
<pre><code>[(<span class="hljs-number">0</span>, 'Python'), (<span class="hljs-number">1</span>, 'Java'), (<span class="hljs-number">2</span>, 'JavaScript')]
</code></pre><h2 id="python-filter">Python filter()</h2>
<p>The filter() function extracts elements from an iterable (list, tuple etc.) for which a function returns True.</p>
<pre><code class="lang-python">numbers = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">5</span>, <span class="hljs-number">6</span>, <span class="hljs-number">7</span>, <span class="hljs-number">8</span>, <span class="hljs-number">9</span>, <span class="hljs-number">10</span>]

<span class="hljs-comment"># returns True if number is even</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">check_even</span>(<span class="hljs-params">number</span>):</span>
    <span class="hljs-keyword">if</span> number % <span class="hljs-number">2</span> == <span class="hljs-number">0</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-literal">True</span>

    <span class="hljs-keyword">return</span> <span class="hljs-literal">False</span>

<span class="hljs-comment"># Extract elements from the numbers list for which check_even() returns True</span>
even_numbers_iterator = filter(check_even, numbers)

<span class="hljs-comment"># converting to list</span>
even_numbers = list(even_numbers_iterator)

print(even_numbers)

<span class="hljs-comment"># Output: [2, 4, 6, 8, 10]</span>
</code></pre>
<pre><code>[<span class="hljs-number">2</span>, <span class="hljs-number">4</span>, <span class="hljs-number">6</span>, <span class="hljs-number">8</span>, <span class="hljs-number">10</span>]
</code></pre><h2 id="python-map">Python map()</h2>
<p>The map() function applies a given function to each item of an iterable (list, tuple etc.) and returns an iterator.</p>
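<p>map() can also take multiple iterables, passing one element from each to the function; a minimal sketch:</p>

```python
a = [1, 2, 3]
b = [10, 20, 30]

# with two iterables, the function receives one item from each
sums = list(map(lambda x, y: x + y, a, b))
print(sums)  # [11, 22, 33]
```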
<pre><code class="lang-python">numbers = [<span class="hljs-number">2</span>, <span class="hljs-number">4</span>, <span class="hljs-number">6</span>, <span class="hljs-number">8</span>, <span class="hljs-number">10</span>]

<span class="hljs-comment"># returns square of a number</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">square</span>(<span class="hljs-params">number</span>):</span>
    <span class="hljs-keyword">return</span> number * number

<span class="hljs-comment"># apply square() function to each item of the numbers list</span>
squared_numbers_iterator = map(square, numbers)

<span class="hljs-comment"># converting to list</span>
squared_numbers = list(squared_numbers_iterator)
print(squared_numbers)

<span class="hljs-comment"># Output: [4, 16, 36, 64, 100]</span>
</code></pre>
<pre><code>[<span class="hljs-number">4</span>, <span class="hljs-number">16</span>, <span class="hljs-number">36</span>, <span class="hljs-number">64</span>, <span class="hljs-number">100</span>]
</code></pre><h2 id="python-pow">Python pow()</h2>
<p>The pow() function returns x raised to the power y, i.e. x**y. An optional third argument computes the result modulo z: pow(x, y, z).</p>
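<p>With a third argument, pow(x, y, z) computes (x ** y) % z, which is much faster than computing the full power first when the numbers are large:</p>

```python
# pow() with three arguments performs modular exponentiation
print(pow(7, 2))     # 49
print(pow(7, 2, 5))  # 4, since 49 % 5 == 4
```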
<pre><code class="lang-python"><span class="hljs-comment"># positive x, positive y (x**y)</span>
print(pow(<span class="hljs-number">2</span>, <span class="hljs-number">2</span>))    <span class="hljs-comment"># 4</span>

<span class="hljs-comment"># negative x, positive y</span>
print(pow(<span class="hljs-number">-2</span>, <span class="hljs-number">2</span>))    <span class="hljs-comment"># 4  </span>

<span class="hljs-comment"># positive x, negative y</span>
print(pow(<span class="hljs-number">2</span>, <span class="hljs-number">-2</span>))    <span class="hljs-comment"># 0.25</span>

<span class="hljs-comment"># negative x, negative y</span>
print(pow(<span class="hljs-number">-2</span>, <span class="hljs-number">-2</span>))    <span class="hljs-comment"># 0.25</span>
</code></pre>
<pre><code><span class="hljs-number">4</span>
<span class="hljs-number">4</span>
<span class="hljs-number">0.25</span>
<span class="hljs-number">0.25</span>
</code></pre><h2 id="python-reversed">Python reversed()</h2>
<p>The reversed() function returns the reversed iterator of the given sequence.</p>
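<p>Note that reversed() returns an iterator and leaves the original sequence unchanged; a quick check:</p>

```python
letters = ['a', 'b', 'c']

# iterate in reverse without modifying the list
print(list(reversed(letters)))  # ['c', 'b', 'a']
print(letters)                  # ['a', 'b', 'c'] -- unchanged
```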
<pre><code class="lang-python"><span class="hljs-comment"># for string</span>
seq_string = <span class="hljs-string">'Python'</span>
print(list(reversed(seq_string)))

<span class="hljs-comment"># for tuple</span>
seq_tuple = (<span class="hljs-string">'P'</span>, <span class="hljs-string">'y'</span>, <span class="hljs-string">'t'</span>, <span class="hljs-string">'h'</span>, <span class="hljs-string">'o'</span>, <span class="hljs-string">'n'</span>)
print(list(reversed(seq_tuple)))

<span class="hljs-comment"># for range</span>
seq_range = range(<span class="hljs-number">5</span>, <span class="hljs-number">9</span>)
print(list(reversed(seq_range)))

<span class="hljs-comment"># for list</span>
seq_list = [<span class="hljs-number">1</span>, <span class="hljs-number">2</span>, <span class="hljs-number">4</span>, <span class="hljs-number">3</span>, <span class="hljs-number">5</span>]
print(list(reversed(seq_list)))
</code></pre>
<pre><code>['n', 'o', 'h', 't', 'y', 'P']
['n', 'o', 'h', 't', 'y', 'P']
[<span class="hljs-number">8</span>, <span class="hljs-number">7</span>, <span class="hljs-number">6</span>, <span class="hljs-number">5</span>]
[<span class="hljs-number">5</span>, <span class="hljs-number">3</span>, <span class="hljs-number">4</span>, <span class="hljs-number">2</span>, <span class="hljs-number">1</span>]
</code></pre><h2 id="python-round">Python round()</h2>
<p>The round() function rounds a number to the given number of decimal places (zero by default, in which case it returns an integer).</p>
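<p>round() also takes an optional ndigits argument. Be aware that Python uses banker's rounding, so exact halves round to the nearest even number:</p>

```python
# second argument: number of decimal places to keep
print(round(13.46, 1))  # 13.5

# halves round to the nearest even integer (banker's rounding)
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2
```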
<pre><code class="lang-python">number = <span class="hljs-number">13.46</span>

<span class="hljs-comment"># round the number</span>
rounded_number = round(number)
print(rounded_number)

<span class="hljs-comment"># Output: 13</span>
</code></pre>
<pre><code><span class="hljs-number">13</span>
</code></pre><h2 id="python-sorted">Python sorted()</h2>
<p>The sorted() function sorts the elements of a given iterable in a specific order (ascending or descending) and returns it as a list.</p>
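<p>sorted() also accepts a key function and a reverse flag; a short sketch:</p>

```python
words = ['banana', 'fig', 'apple']

# sort by length instead of alphabetically
print(sorted(words, key=len))  # ['fig', 'apple', 'banana']

# sort in descending order
print(sorted([4, 2, 12, 8], reverse=True))  # [12, 8, 4, 2]
```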
<pre><code class="lang-python">numbers = [<span class="hljs-number">4</span>, <span class="hljs-number">2</span>, <span class="hljs-number">12</span>, <span class="hljs-number">8</span>]

sorted_numbers = sorted(numbers)
print(sorted_numbers)

<span class="hljs-comment"># Output: [2, 4, 8, 12]</span>
</code></pre>
<pre><code>[<span class="hljs-number">2</span>, <span class="hljs-number">4</span>, <span class="hljs-number">8</span>, <span class="hljs-number">12</span>]
</code></pre><h2 id="python-zip">Python zip()</h2>
<p>The zip() function takes zero or more iterables and returns an iterator of tuples, where the i-th tuple contains the i-th element from each iterable.</p>
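<p>The same function can "unzip" a zipped result by unpacking it with the * operator:</p>

```python
pairs = [('Java', 14), ('Python', 3), ('JavaScript', 6)]

# zip(*iterable) regroups the tuples back into separate sequences
languages, versions = zip(*pairs)
print(languages)  # ('Java', 'Python', 'JavaScript')
print(versions)   # (14, 3, 6)
```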
<pre><code class="lang-python">languages = [<span class="hljs-string">'Java'</span>, <span class="hljs-string">'Python'</span>, <span class="hljs-string">'JavaScript'</span>]
versions = [<span class="hljs-number">14</span>, <span class="hljs-number">3</span>, <span class="hljs-number">6</span>]

result = zip(languages, versions)
print(list(result))
</code></pre>
<pre><code>[('Java', <span class="hljs-number">14</span>), ('Python', <span class="hljs-number">3</span>), ('JavaScript', <span class="hljs-number">6</span>)]
</code></pre><p><strong>Note: Find more in the official <a target="_blank" href="https://docs.python.org/3/library/functions.html">Built-in Functions</a> documentation.</strong></p>
]]></content:encoded></item><item><title><![CDATA[File Handling in Python]]></title><description><![CDATA[Python too supports file handling and allows users to handle files i.e., to read and write files, along with many other file handling options, to operate on files.
Working of open() function
We use open () function in Python to open a file in read or...]]></description><link>https://deviloper.in/file-handling-in-python</link><guid isPermaLink="true">https://deviloper.in/file-handling-in-python</guid><category><![CDATA[Python]]></category><category><![CDATA[python beginner]]></category><dc:creator><![CDATA[Rahul Dubey]]></dc:creator><pubDate>Tue, 28 Sep 2021 22:45:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1632869090374/y_YTWTHxG.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Python supports file handling out of the box: you can read and write files, and the standard library offers many other options for operating on them.</p>
<h3 id="working-of-open-function">Working of open() function</h3>
<p>We use the <strong>open()</strong> function in Python to open a file in read or write mode. open() returns a file object; it takes two arguments, the file name and the mode, so the syntax is open(filename, mode). These are the common modes Python provides for opening a file:</p>
<ul>
<li>“ <strong>r</strong> “, for reading.</li>
<li>“ <strong>w</strong> “, for writing.</li>
<li>“ <strong>a</strong> “, for appending.</li>
<li>“ <strong>r+</strong> “, for both reading and writing</li>
</ul>
<p>Keep in mind that the mode argument is optional; if it is not passed, Python assumes “ <strong>r</strong> ” by default. Let’s look at this program and analyze how the read mode works:</p>
<pre><code class="lang-python"><span class="hljs-comment"># a file named "test.txt" will be opened in reading mode</span>
file = open(<span class="hljs-string">'test.txt'</span>, <span class="hljs-string">'r'</span>)
<span class="hljs-comment"># This will print every line of the file one by one</span>
<span class="hljs-keyword">for</span> each <span class="hljs-keyword">in</span> file:
    <span class="hljs-keyword">print</span>(each)
</code></pre>
<pre><code> <span class="hljs-keyword">this</span> <span class="hljs-keyword">is</span> test file <span class="hljs-keyword">for</span> understanding 

the concept of file handling
</code></pre><h3 id="working-of-read-mode">Working of read() mode</h3>
<p>There is more than one way to read a file in Python. To extract a single string containing all the characters in the file, use <strong>file.read()</strong>. The full code works like this:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Python code to illustrate read() mode</span>
file = open(<span class="hljs-string">"test.txt"</span>, <span class="hljs-string">"r"</span>)
<span class="hljs-keyword">print</span>(file.read())
</code></pre>
<pre><code> <span class="hljs-keyword">this</span> <span class="hljs-keyword">is</span> test file <span class="hljs-keyword">for</span> understanding 
the concept of file handling
</code></pre><pre><code class="lang-python"><span class="hljs-comment"># Python code to illustrate read(n), which reads the first n characters</span>
file = open(<span class="hljs-string">"test.txt"</span>, <span class="hljs-string">"r"</span>)
<span class="hljs-keyword">print</span>(file.read(<span class="hljs-number">5</span>))
</code></pre>
<pre><code> <span class="hljs-keyword">this</span>
</code></pre><h3 id="creating-a-file-using-write-mode">Creating a file using write() mode</h3>
<p>Let’s see how to create a file and how write mode works. Write the following in your Python environment:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Python code to create a file</span>
file = open(<span class="hljs-string">'geek.txt'</span>,<span class="hljs-string">'w'</span>)
file.write(<span class="hljs-string">"This is the write command"</span>)
file.write(<span class="hljs-string">"It allows us to write in a particular file"</span>)
file.close()
</code></pre>
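<p>Note that write() does not append newline characters, so the two strings above end up on a single line. Include the newlines yourself, or pass a list of lines to writelines(); a small sketch:</p>

```python
# writelines() writes a list of strings; newlines must still be explicit
lines = ["This is the write command\n",
         "It allows us to write in a particular file\n"]
with open('geek.txt', 'w') as file:
    file.writelines(lines)
```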
<h3 id="working-of-append-mode">Working of append mode</h3>
<p>Let’s see how the append mode works:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Python code to illustrate append() mode</span>
file = open(<span class="hljs-string">'geek.txt'</span>,<span class="hljs-string">'a'</span>)
file.write(<span class="hljs-string">"This will add this line"</span>)
file.close()
</code></pre>
<h3 id="using-write-along-with-with-function">Using write() along with the with statement</h3>
<p>We can also use the write() function together with the <strong>with</strong> statement, which closes the file automatically:</p>
<pre><code class="lang-python">
<span class="hljs-comment"># Python code to illustrate with() alongwith write()</span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"file.txt"</span>, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> f: 
    f.write(<span class="hljs-string">"Hello World!!!"</span>)
</code></pre>
<h3 id="split-using-file-handling">split() using file handling</h3>
<p>We can also split lines while reading a file in Python. split() breaks each line wherever a space is encountered; you can also split on any character you wish. Here is the code:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Python code to illustrate split() function</span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"file.txt"</span>, <span class="hljs-string">"r"</span>) <span class="hljs-keyword">as</span> file:
    data = file.readlines()
    <span class="hljs-keyword">for</span> line <span class="hljs-keyword">in</span> data:
        word = line.split()
        <span class="hljs-keyword">print</span> (word)
</code></pre>
<pre><code>['Hello', 'World!!!']
</code></pre><h3 id="working-with-binary">Working With Binary</h3>
<pre><code class="lang-python">f1 = open(<span class="hljs-string">"python.jpg"</span>, <span class="hljs-string">"rb"</span>)
f2 = open(<span class="hljs-string">"newpic.jpg"</span>, <span class="hljs-string">"wb"</span>)
bytes_data = f1.read()
f2.write(bytes_data)
f1.close()
f2.close()
print(<span class="hljs-string">"New Image is available with the name: newpic.jpg"</span>)
</code></pre>
<pre><code><span class="hljs-built_in">New</span> Image <span class="hljs-keyword">is</span> available <span class="hljs-keyword">with</span> the <span class="hljs-type">name</span>: newpic.jpg
</code></pre><h2 id="python-read-and-write-file-json-xml-csv-xlsx-sample-code">Python Read and Write File (JSON, XML, CSV, xlsx) – Sample Code</h2>
<h3 id="json-files-read-and-write-from-python">JSON Files – Read and Write from Python :</h3>
<ul>
<li>Parse JSON strings into Python objects:</li>
</ul>
<pre><code class="lang-python"><span class="hljs-comment"># Parsing a JSON string into a Python dictionary</span>
<span class="hljs-keyword">import</span> json
<span class="hljs-comment"># some JSON:</span>
x =  <span class="hljs-string">'{ "city":"Washington", "population":300000, "country":"US"}'</span>
<span class="hljs-comment"># parse x:</span>
y = json.loads(x)
<span class="hljs-comment"># the result is a Python dictionary:</span>
print(y[<span class="hljs-string">"city"</span>])
</code></pre>
<pre><code>Washington
</code></pre><ul>
<li>Converting Python Objects to JSON :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-comment"># Python Dictionary object:</span>
x = {
  <span class="hljs-string">"city"</span>:<span class="hljs-string">"Washington"</span>, 
  <span class="hljs-string">"population"</span>:<span class="hljs-number">300000</span>, 
  <span class="hljs-string">"country"</span>:<span class="hljs-string">"US"</span>
}

<span class="hljs-comment"># convert into JSON:</span>
y = json.dumps(x)

<span class="hljs-comment"># the result is a JSON string:</span>
print(y)
</code></pre>
<pre><code>{<span class="hljs-attr">"city"</span>: <span class="hljs-string">"Washington"</span>, <span class="hljs-attr">"population"</span>: <span class="hljs-number">300000</span>, <span class="hljs-attr">"country"</span>: <span class="hljs-string">"US"</span>}
</code></pre><ul>
<li>Read JSON File :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

<span class="hljs-keyword">with</span> open(<span class="hljs-string">'file.json'</span>) <span class="hljs-keyword">as</span> f:
    data = json.loads(f.read())
print(data)

<span class="hljs-comment"># We can do the same thing with pandas</span>
data_df = pd.read_json(<span class="hljs-string">'file.json'</span>, orient=<span class="hljs-string">'records'</span>)
print(data_df)

<span class="hljs-comment"># To Pretty Print</span>
print(json.dumps(data, indent = <span class="hljs-number">4</span>, sort_keys=<span class="hljs-literal">True</span>))
</code></pre>
<pre><code>{<span class="hljs-attr">'employee':</span> {<span class="hljs-attr">'name':</span> <span class="hljs-string">'sonoo'</span>, <span class="hljs-attr">'salary':</span> <span class="hljs-number">56000</span>, <span class="hljs-attr">'married':</span> <span class="hljs-literal">True</span>}}
        <span class="hljs-string">employee</span>
<span class="hljs-string">married</span>     <span class="hljs-literal">True</span>
<span class="hljs-string">name</span>       <span class="hljs-string">sonoo</span>
<span class="hljs-string">salary</span>     <span class="hljs-number">56000</span>
{
    <span class="hljs-attr">"employee":</span> {
        <span class="hljs-attr">"married":</span> <span class="hljs-literal">true</span>,
        <span class="hljs-attr">"name":</span> <span class="hljs-string">"sonoo"</span>,
        <span class="hljs-attr">"salary":</span> <span class="hljs-number">56000</span>
    }
}
</code></pre><ul>
<li>Write JSON Data to a File :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

data_dict = {
  <span class="hljs-string">"city"</span>:<span class="hljs-string">"Washington"</span>, 
  <span class="hljs-string">"population"</span>:<span class="hljs-number">300000</span>, 
  <span class="hljs-string">"country"</span>:<span class="hljs-string">"US"</span>
}

<span class="hljs-keyword">with</span> open(<span class="hljs-string">'file.txt'</span>, <span class="hljs-string">'w'</span>) <span class="hljs-keyword">as</span> json_file:
    json.dump(data_dict, json_file)

<span class="hljs-comment"># We can do the same thing with pandas</span>
df= pd.DataFrame.from_dict(data_dict, orient=<span class="hljs-string">'index'</span>)
df.to_json(<span class="hljs-string">'file.json'</span>, orient=<span class="hljs-string">'records'</span>)
</code></pre>
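<p>The round trip can also skip the intermediate string entirely: json.dump() and json.load() work directly on file objects. A minimal, self-contained sketch (the file name roundtrip.json is arbitrary):</p>

```python
import json

data_dict = {"city": "Washington", "population": 300000, "country": "US"}

# dump straight to a file object...
with open('roundtrip.json', 'w') as f:
    json.dump(data_dict, f)

# ...and load straight back from one
with open('roundtrip.json') as f:
    loaded = json.load(f)

print(loaded == data_dict)  # True
```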
<h3 id="xml-files-read-and-write-from-python">XML Files – Read and Write from Python :</h3>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">nodedetails</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">name</span>&gt;</span>SERVER-1<span class="hljs-tag">&lt;/<span class="hljs-name">name</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">description</span>/&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">numExecutors</span>&gt;</span>SOME_NO_XX<span class="hljs-tag">&lt;/<span class="hljs-name">numExecutors</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">mode</span>&gt;</span>SOME_MODE_XX<span class="hljs-tag">&lt;/<span class="hljs-name">mode</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">launcher</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"xxxxx"</span> <span class="hljs-attr">plugin</span>=<span class="hljs-string">"xxxxx"</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">hostname</span>&gt;</span>128.0.0.1<span class="hljs-tag">&lt;/<span class="hljs-name">hostname</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">portno</span>&gt;</span>9900<span class="hljs-tag">&lt;/<span class="hljs-name">portno</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">credentialsId</span>&gt;</span>TESTID<span class="hljs-tag">&lt;/<span class="hljs-name">credentialsId</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">maxNumRetries</span>&gt;</span>3<span class="hljs-tag">&lt;/<span class="hljs-name">maxNumRetries</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">launcher</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">label</span>&gt;</span>somelabel<span class="hljs-tag">&lt;/<span class="hljs-name">label</span>&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">nodedetails</span>&gt;</span>
</code></pre>
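<p>When the XML is already in a string (for example, fetched from an API) rather than a file, ElementTree can parse it directly with fromstring(); a small sketch using a trimmed-down version of the sample above:</p>

```python
import xml.etree.ElementTree as ET

# ET.fromstring() parses XML from a string and returns the root element
xml_text = ("<nodedetails><name>SERVER-1</name>"
            "<launcher><portno>9900</portno></launcher></nodedetails>")
root = ET.fromstring(xml_text)

print(root.find('name').text)             # SERVER-1
print(root.find('launcher/portno').text)  # 9900
```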
<ul>
<li>Read from the XML File using ElementTree :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> xml.etree.ElementTree <span class="hljs-keyword">as</span> ET
tree = ET.parse(<span class="hljs-string">"TEST.xml"</span>)
root = tree.getroot()

<span class="hljs-keyword">for</span> item <span class="hljs-keyword">in</span> root.iter(<span class="hljs-string">'nodedetails'</span>):
    <span class="hljs-keyword">for</span> name <span class="hljs-keyword">in</span> item.iter(<span class="hljs-string">"name"</span>):
        <span class="hljs-keyword">print</span> (name.text)
    <span class="hljs-keyword">for</span> portno <span class="hljs-keyword">in</span> item.iter(<span class="hljs-string">"portno"</span>):
        <span class="hljs-keyword">print</span> (portno.text)
</code></pre>
<pre><code><span class="hljs-keyword">SERVER</span><span class="hljs-number">-1</span>
<span class="hljs-number">9900</span>
</code></pre><ul>
<li>Read from any XML File using Beautifulsoup :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> bs4 <span class="hljs-keyword">import</span> BeautifulSoup

<span class="hljs-keyword">with</span> open(<span class="hljs-string">'TEST.xml'</span>, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f: 
    data = f.read()

<span class="hljs-comment"># Beautifulsoup will parse the data </span>
All_data = BeautifulSoup(data, <span class="hljs-string">"xml"</span>)

<span class="hljs-comment"># Finding all instances of any tag  </span>
tag_data = All_data.find_all(<span class="hljs-string">'hostname'</span>)
print(tag_data)
</code></pre>
<pre><code>[<span class="hljs-tag">&lt;<span class="hljs-name">hostname</span>&gt;</span>128.0.0.1<span class="hljs-tag">&lt;/<span class="hljs-name">hostname</span>&gt;</span>]
</code></pre><ul>
<li>Writing an XML File using ElementTree :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> xml.etree.ElementTree <span class="hljs-keyword">as</span> ET

<span class="hljs-comment"># Parent (root) tag </span>
data1 = ET.Element(<span class="hljs-string">'parents'</span>)

<span class="hljs-comment"># Adding a subtag named `children`  inside our root tag "parents"</span>
data2 = ET.SubElement(data1, <span class="hljs-string">'children'</span>)

<span class="hljs-comment"># Adding subtags under the `children` subtag </span>
grandChild1 = ET.SubElement(data2, <span class="hljs-string">'E4'</span>) 
grandChild2 = ET.SubElement(data2, <span class="hljs-string">'D4'</span>)

<span class="hljs-comment"># Adding a `card` attribute to each grandchild tag</span>
grandChild1.set(<span class="hljs-string">'card'</span>, <span class="hljs-string">'Credit_Card'</span>) 
grandChild2.set(<span class="hljs-string">'card'</span>, <span class="hljs-string">'Debit_Card'</span>)

<span class="hljs-comment"># Adding text to grandchildren subtags</span>
grandChild1.text = <span class="hljs-string">"Only Credit Cards Accepted"</span>
grandChild2.text = <span class="hljs-string">"Only Debit Cards Accepted"</span>

<span class="hljs-comment"># Convert xml data to byte object to write into filestream </span>
data_xml = ET.tostring(data1)

<span class="hljs-comment"># Write to a file with operation mode `wb` (write + binary) </span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"OUTPUT.xml"</span>, <span class="hljs-string">"wb"</span>) <span class="hljs-keyword">as</span> f: 
    f.write(data_xml)

<span class="hljs-keyword">from</span> bs4 <span class="hljs-keyword">import</span> BeautifulSoup

<span class="hljs-keyword">with</span> open(<span class="hljs-string">'OUTPUT.xml'</span>, <span class="hljs-string">'r'</span>) <span class="hljs-keyword">as</span> f: 
    data = f.read()

<span class="hljs-comment"># Beautifulsoup will parse the data </span>
All_data = BeautifulSoup(data, <span class="hljs-string">"xml"</span>)
print(All_data)
</code></pre>
<pre><code><span class="hljs-meta">&lt;?xml version="1.0" encoding="utf-8"?&gt;</span>
<span class="hljs-tag">&lt;<span class="hljs-name">parents</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">children</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">E4</span> <span class="hljs-attr">card</span>=<span class="hljs-string">"Credit_Card"</span>&gt;</span>Only Credit Cards Accepted<span class="hljs-tag">&lt;/<span class="hljs-name">E4</span>&gt;</span><span class="hljs-tag">&lt;<span class="hljs-name">D4</span> <span class="hljs-attr">card</span>=<span class="hljs-string">"Debit_Card"</span>&gt;</span>Only Debit Cards Accepted<span class="hljs-tag">&lt;/<span class="hljs-name">D4</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">children</span>&gt;</span><span class="hljs-tag">&lt;/<span class="hljs-name">parents</span>&gt;</span>
</code></pre><ul>
<li>Writing to an XML using Beautifulsoup :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-comment"># Open and parse the file</span>
soup = BeautifulSoup(open(<span class="hljs-string">'TEST.xml'</span>), <span class="hljs-string">'xml'</span>)

<span class="hljs-comment"># Modify the parsed tree, e.g. change the hostname</span>
soup.hostname.string = <span class="hljs-string">'127.0.0.1'</span>

<span class="hljs-comment"># Write the modified XML back to a file</span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">'OUTPUT.xml'</span>, <span class="hljs-string">'w'</span>) <span class="hljs-keyword">as</span> f:
    f.write(str(soup))
</code></pre>
<h3 id="csv-files-read-and-write-from-python">CSV Files – Read and Write from Python :</h3>
<ul>
<li>Read from CSV File using Native Python Lib :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> csv
rows=[]
<span class="hljs-keyword">with</span> open(<span class="hljs-string">'sample4.csv'</span>) <span class="hljs-keyword">as</span> inputfile:
    csvreader = csv.reader(inputfile, delimiter=<span class="hljs-string">','</span>)
    <span class="hljs-comment"># Extracting each data row one by one </span>
    <span class="hljs-keyword">for</span> row <span class="hljs-keyword">in</span> csvreader: 
        rows.append(row)

<span class="hljs-comment"># Printing out the first 5 rows </span>
<span class="hljs-keyword">for</span> row <span class="hljs-keyword">in</span> rows[:<span class="hljs-number">5</span>]: 
    print(row)
</code></pre>
<pre><code>['Game Number', ' <span class="hljs-string">"Game Length"</span>']
['<span class="hljs-number">1</span>', ' <span class="hljs-number">30</span>']
['<span class="hljs-number">2</span>', ' <span class="hljs-number">29</span>']
['<span class="hljs-number">3</span>', ' <span class="hljs-number">31</span>']
['<span class="hljs-number">4</span>', ' <span class="hljs-number">16</span>']
</code></pre><ul>
<li>Read from CSV File using Pandas Lib :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd
data = pd.read_csv(<span class="hljs-string">'sample4.csv'</span>)
<span class="hljs-comment"># Display first 10 Lines of data</span>
data.head(<span class="hljs-number">10</span>)
</code></pre>
<div>

<table>
  <thead>
    <tr>
      <th></th>
      <th>Game Number</th>
      <th>"Game Length"</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>0</td>
      <td>1</td>
      <td>30</td>
    </tr>
    <tr>
      <td>1</td>
      <td>2</td>
      <td>29</td>
    </tr>
    <tr>
      <td>2</td>
      <td>3</td>
      <td>31</td>
    </tr>
    <tr>
      <td>3</td>
      <td>4</td>
      <td>16</td>
    </tr>
    <tr>
      <td>4</td>
      <td>5</td>
      <td>24</td>
    </tr>
    <tr>
      <td>5</td>
      <td>6</td>
      <td>29</td>
    </tr>
    <tr>
      <td>6</td>
      <td>7</td>
      <td>28</td>
    </tr>
    <tr>
      <td>7</td>
      <td>8</td>
      <td>117</td>
    </tr>
    <tr>
      <td>8</td>
      <td>9</td>
      <td>42</td>
    </tr>
    <tr>
      <td>9</td>
      <td>10</td>
      <td>23</td>
    </tr>
  </tbody>
</table>
</div>
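<p>The standard library also offers csv.DictReader, which maps each row to a dictionary keyed by the header row; the sketch below uses io.StringIO in place of a real file:</p>

```python
import csv
import io

# StringIO stands in for an open CSV file here
data = io.StringIO("Game Number,Game Length\n1,30\n2,29\n")

rows = list(csv.DictReader(data))
print(rows[0]["Game Number"], rows[0]["Game Length"])  # 1 30
```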



<ul>
<li>Write to a CSV File using Native Python lib :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> csv

data = [ [<span class="hljs-string">'Emily'</span>, <span class="hljs-string">'12'</span>, <span class="hljs-string">'18'</span>, <span class="hljs-string">'112'</span>], 
         [<span class="hljs-string">'Katie'</span>, <span class="hljs-string">'8'</span>, <span class="hljs-string">'24'</span>, <span class="hljs-string">'96'</span>], 
         [<span class="hljs-string">'John'</span>, <span class="hljs-string">'16'</span>, <span class="hljs-string">'9'</span>, <span class="hljs-string">'101'</span>], 
         [<span class="hljs-string">'Mike'</span>, <span class="hljs-string">'3'</span>, <span class="hljs-string">'14'</span>, <span class="hljs-string">'82'</span>]]

<span class="hljs-keyword">with</span> open(<span class="hljs-string">"OUTPUTFILE.csv"</span>, <span class="hljs-string">'w'</span>, newline=<span class="hljs-string">''</span>) <span class="hljs-keyword">as</span> f:
    writer = csv.writer(f)
    <span class="hljs-keyword">for</span> row <span class="hljs-keyword">in</span> data:
        print(row)
        writer.writerow(row)
</code></pre>
<pre><code>['Emily', '<span class="hljs-number">12</span>', '<span class="hljs-number">18</span>', '<span class="hljs-number">112</span>']
['Katie', '<span class="hljs-number">8</span>', '<span class="hljs-number">24</span>', '<span class="hljs-number">96</span>']
['John', '<span class="hljs-number">16</span>', '<span class="hljs-number">9</span>', '<span class="hljs-number">101</span>']
['Mike', '<span class="hljs-number">3</span>', '<span class="hljs-number">14</span>', '<span class="hljs-number">82</span>']
</code></pre><ul>
<li>Write to a CSV File using pandas :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

data_dict = {
  <span class="hljs-string">"city"</span>:<span class="hljs-string">"Washington"</span>, 
  <span class="hljs-string">"population"</span>:<span class="hljs-number">300000</span>, 
  <span class="hljs-string">"country"</span>:<span class="hljs-string">"US"</span>
}

df= pd.DataFrame.from_dict(data_dict, orient=<span class="hljs-string">'index'</span>)
df.to_csv(<span class="hljs-string">'OUTPUTFILE.csv'</span>)
</code></pre>
<h2 id="xlsx-files-read-and-write-from-python">xlsx Files – Read and Write from Python :</h2>
<p>Write to an xlsx file using pandas:</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

data_dict = {
  <span class="hljs-string">"city"</span>:<span class="hljs-string">"Washington"</span>, 
  <span class="hljs-string">"population"</span>:<span class="hljs-number">300000</span>, 
  <span class="hljs-string">"country"</span>:<span class="hljs-string">"US"</span>
}

df= pd.DataFrame.from_dict(data_dict, orient=<span class="hljs-string">'index'</span>)
df.to_excel(<span class="hljs-string">'OUTPUTFILE.xlsx'</span>)
</code></pre>
<ul>
<li>Read from xlsx File using Pandas Lib :</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> pandas <span class="hljs-keyword">as</span> pd

df=pd.read_excel(<span class="hljs-string">'OUTPUTFILE.xlsx'</span>)
print(df)
</code></pre>
<pre><code>   <span class="hljs-attr">Unnamed:</span> <span class="hljs-number">0</span>           <span class="hljs-number">0</span>
<span class="hljs-number">0</span>        <span class="hljs-string">city</span>  <span class="hljs-string">Washington</span>
<span class="hljs-number">1</span>  <span class="hljs-string">population</span>      <span class="hljs-number">300000</span>
<span class="hljs-number">2</span>     <span class="hljs-string">country</span>          <span class="hljs-string">US</span>
</code></pre>]]></content:encoded></item></channel></rss>