DigitalOcean Community | Python Advanced tutorials, questions and resources

# How to calculate BLEU Score in Python?

```python
print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
```
Output:
```markup
BLEU score -> 1.0
```
We get a perfect score of 1 as the candidate sentence belongs to the reference set. Let's try another one.
```python
candidate = 'it is a dog'.split()
print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
```
Output:
```markup
BLEU score -> 0.8408964152537145
```
We have a very similar sentence in our reference set, but the candidate isn't an exact match for any reference. This is why we get a score of 0.84.
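To see where scores like these come from, it helps to compute BLEU's modified (clipped) n-gram precision by hand. The sketch below is a simplified illustration, not NLTK's actual implementation (it omits the brevity penalty and smoothing):

```python
from collections import Counter

def ngram_counts(tokens, n):
    # count the n-grams in a token list
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(references, candidate, n):
    # clip each candidate n-gram count by its maximum count in any reference
    cand = ngram_counts(candidate, n)
    max_ref = Counter()
    for ref in references:
        for gram, count in ngram_counts(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / max(1, sum(cand.values()))

reference = [
    'this is a dog'.split(),
    'it is dog'.split(),
    'dog it is'.split(),
    'a dog, it is'.split()
]
candidate = 'it is a dog'.split()
print(modified_precision(reference, candidate, 1))  # 1.0 -> every word appears in some reference
print(modified_precision(reference, candidate, 3))  # 0.5 -> only one of the two trigrams matches
```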
### 3\. Complete Code for Implementing BLEU Score in Python
Here's the complete code from this section.
```python
from nltk.translate.bleu_score import sentence_bleu
reference = [
    'this is a dog'.split(),
    'it is dog'.split(),
    'dog it is'.split(),
    'a dog, it is'.split()
]
candidate = 'it is dog'.split()
print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
candidate = 'it is a dog'.split()
print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
```
### 4\. Calculating the n-gram score
While matching sentences, you can choose how many consecutive words you want the model to match at once. For example, you can match words one at a time (**1-gram**), in **pairs (2-gram)**, or in **triplets (3-gram)**.
In this section we will learn how to calculate these n-gram scores.
In the **sentence\_bleu()** function, you can pass a weights argument with weights corresponding to the individual grams.
For example, to calculate gram scores individually you can use the following weights.
```markup
Individual 1-gram: (1, 0, 0, 0)
Individual 2-gram: (0, 1, 0, 0)
Individual 3-gram: (0, 0, 1, 0)
Individual 4-gram: (0, 0, 0, 1)
```
**The Python code for this is given below:**
```python
from nltk.translate.bleu_score import sentence_bleu
reference = [
    'this is a dog'.split(),
    'it is dog'.split(),
    'dog it is'.split(),
    'a dog, it is'.split()
]
candidate = 'it is a dog'.split()
print('Individual 1-gram: %f' % sentence_bleu(reference, candidate, weights=(1, 0, 0, 0)))
print('Individual 2-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 1, 0, 0)))
print('Individual 3-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 1, 0)))
print('Individual 4-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 0, 1)))
```
Output:
```markup
Individual 1-gram: 1.000000
Individual 2-gram: 1.000000
Individual 3-gram: 0.500000
Individual 4-gram: 1.000000
```
By default, the **sentence\_bleu()** function calculates the **cumulative 4-gram BLEU score**, also called **BLEU-4**. The weights for BLEU-4 are as follows:
```markup
(0.25, 0.25, 0.25, 0.25)
```
**Let's see the BLEU-4 code:**
```python
score = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25))
print(score)
```
**Output:**
```markup
0.8408964152537145
```
That's the same score we got earlier without passing any weights, confirming that BLEU-4 is the default.
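Under the hood, a cumulative score is the weighted geometric mean of the individual n-gram precisions. Using the individual scores from the previous section (1.0, 1.0, 0.5, 1.0), a small sketch reproduces the BLEU-4 number; this illustrates the arithmetic only, not NLTK's full implementation:

```python
import math

def weighted_geometric_mean(precisions, weights):
    # exp(sum(w_n * log(p_n))) == product(p_n ** w_n)
    return math.exp(sum(w * math.log(p) for w, p in zip(weights, precisions)))

precisions = [1.0, 1.0, 0.5, 1.0]  # individual n-gram scores from above
print(weighted_geometric_mean(precisions, (0.25, 0.25, 0.25, 0.25)))
# ~0.8409, matching the BLEU-4 output above
```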
## Conclusion
This tutorial was about calculating the BLEU score in Python. We learned what it is and how to calculate individual and cumulative n-gram BLEU scores. Hope you had fun learning with us!

Jayant Verma, 2022-08-03

# Bootstrap Sampling in Python

_In statistics, Bootstrap Sampling is a method that involves repeatedly drawing sample data, with replacement, from a data source to estimate a population parameter._
This means that bootstrap sampling is a technique with which you can estimate a parameter, such as the mean, for an entire population without explicitly considering each and every data point in it.
Instead of looking at the entire population, we look at multiple subsets of the same size drawn from the population.
For example, suppose your population size is **1000**. To estimate the mean, instead of considering all 1000 entries, you can take **50 samples of size 4 each** and calculate the mean of each sample. This way you will be averaging over **200 entries** (50 × 4) chosen at random.
A similar strategy is used by market researchers to carry out research in a huge population.
## How to implement Bootstrap Sampling in Python?
Now let's look at how to implement bootstrap sampling in Python.
We will generate some random data with a predetermined mean. To do that we are going to use the [NumPy module in Python](/community/tutorials/python-numpy-tutorial).
Let's start by importing the necessary modules.
### 1\. Import the necessary modules
The modules we need are:
- NumPy
- Random
To import these modules, use:
```python
import numpy as np
import random
```
In the next step, we need to generate some random data. Let's do that using the NumPy module.
### 2\. Generate Random Data
Let's generate a normal distribution with a mean of **300** and with **1000** entries.
The code for that is given below:
```python
x = np.random.normal(loc=300.0, size=1000)
```
We can calculate the mean of this data using:
```python
print(np.mean(x))
```
Output:
```markup
300.01293472373254
```
Note that this is the actual mean of the population.
### 3\. Use Bootstrap Sampling to estimate the mean
Let's create 50 samples of size 4 each to estimate the mean.
The code for doing that is:
```python
sample_mean = []
for i in range(50):
    # draw each sample of 4 *with replacement*, as bootstrap sampling requires
    y = random.choices(x.tolist(), k=4)
    avg = np.mean(y)
    sample_mean.append(avg)
```
The list _sample\_mean_ now contains the mean of each of the 50 samples. To estimate the mean of the population, we calculate the mean of _sample\_mean_. You can do that using:
```python
print(np.mean(sample_mean))
```
Output:
```markup
300.07261467146867
```
Now if we run the code in this section again, we will get a different output, because each run generates new random samples. However, each time the output will be close to the actual mean (300).
On running the code in this section again, we get the following output:
```markup
299.99137705245636
```
Running it again, we get:
```markup
300.13411004148315
```
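The variability above shrinks as you take more and larger samples. The sketch below illustrates this, sampling with replacement as the bootstrap definition requires; the seeds are only there to make the run reproducible:

```python
import random
import numpy as np

random.seed(0)
x = np.random.default_rng(0).normal(loc=300.0, size=1000)

def bootstrap_mean(data, n_samples, sample_size):
    # average the means of n_samples bootstrap samples drawn with replacement
    means = [np.mean(random.choices(data, k=sample_size)) for _ in range(n_samples)]
    return float(np.mean(means))

data = x.tolist()
print(abs(bootstrap_mean(data, 50, 4) - np.mean(x)))    # typically well under 1
print(abs(bootstrap_mean(data, 500, 50) - np.mean(x)))  # typically much smaller still
```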
## Complete code to Implement Bootstrap Sampling in Python
Here's the complete code for this tutorial:
```python
import numpy as np
import random

x = np.random.normal(loc=300.0, size=1000)
print(np.mean(x))

sample_mean = []
for i in range(50):
    # sample with replacement (bootstrap)
    y = random.choices(x.tolist(), k=4)
    avg = np.mean(y)
    sample_mean.append(avg)

print(np.mean(sample_mean))
```
## Conclusion
This tutorial was about Bootstrap Sampling in Python. We learned how to estimate the mean of a population by creating smaller samples. This is very useful in the world of Machine Learning to avoid overfitting. Hope you had fun learning with us!

Jayant Verma, 2022-08-03

# MNIST Dataset in Python - Basic Importing and Plotting

Jayant Verma, 2022-08-03

# The Sigmoid Activation Function - Python Implementation
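For reference, the sigmoid function maps any real input into the open interval (0, 1); a minimal sketch:

```python
import math

def sigmoid(x):
    # squash any real number into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # 0.5
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```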
**The following code implements the Leaky ReLu function in Python:**
```python
def leaky_relu(x):
    if x > 0:
        return x
    else:
        return 0.01 * x
x = 1.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -10.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 0.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 15.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -20.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
```
Output:
```markup
Applying Leaky Relu on (1.0) gives 1.0
Applying Leaky Relu on (-10.0) gives -0.1
Applying Leaky Relu on (0.0) gives 0.0
Applying Leaky Relu on (15.0) gives 15.0
Applying Leaky Relu on (-20.0) gives -0.2
```
## Conclusion
This tutorial was about the Sigmoid activation function. We learned how to implement and plot the function in Python.

Jayant Verma, 2022-08-03

# Vectors in Python - A Quick Introduction!

Safa Mulani, 2022-08-03

# ReLu Function in Python

The ReLu function returns its input for positive values and 0 for everything else:

```python
def relu(input):
    if input > 0:
        return input
    else:
        return 0
```
In this tutorial, we will learn how to implement our own ReLu function, learn about some of its disadvantages and learn about a better version of ReLu.
**_Recommended read: [Linear Algebra for Machine Learning \[Part 1/2\]](/community/tutorials/linear-algebra-for-machine-learning-1)_**
Let's get started!
## Implementing ReLu function in Python
Let's write our own implementation of ReLu in Python. We will use the built-in max function to implement it.
The code for ReLu is as follows:
```python
def relu(x):
    return max(0.0, x)
```
To test the function, let's run it on a few inputs.
```python
x = 1.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -10.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 0.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 15.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -20.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
```
### Complete Code
The complete code is given below:
```python
def relu(x):
    return max(0.0, x)
x = 1.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -10.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 0.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = 15.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
x = -20.0
print('Applying Relu on (%.1f) gives %.1f' % (x, relu(x)))
```
Output:
```markup
Applying Relu on (1.0) gives 1.0
Applying Relu on (-10.0) gives 0.0
Applying Relu on (0.0) gives 0.0
Applying Relu on (15.0) gives 15.0
Applying Relu on (-20.0) gives 0.0
```
### Gradient of ReLu function
Let's see what the gradient (derivative) of the ReLu function would be. On differentiating, we get:
```python
f'(x) = 1, x >= 0
      = 0, x < 0
```
We can see that for values of x less than zero, the gradient is 0. This means that the weights and biases of those neurons are never updated, which can be a problem in the training process.
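We can check this numerically with a central-difference approximation of the derivative (a sketch for illustration; the step size h is an arbitrary small value):

```python
def relu(x):
    return max(0.0, x)

def numerical_grad(f, x, h=1e-6):
    # central difference: (f(x + h) - f(x - h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

print(numerical_grad(relu, 5.0))   # ~1.0
print(numerical_grad(relu, -5.0))  # 0.0 -> no gradient flows for negative inputs
```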
To overcome this problem, we have the **Leaky ReLu function.** Let's learn about it next.
## Leaky ReLu function
The Leaky ReLu function is an improved version of the regular ReLu function. To address the problem of zero gradient for negative values, Leaky ReLu assigns a very small linear component of x to negative inputs.
Mathematically we can express Leaky ReLu as:
```python
f(x) = 0.01x, x < 0
     = x,     x >= 0
```
Equivalently, in terms of a small constant:
- **_f(x) = αx_** for **_x < 0_**
- **_f(x) = x_** for **_x >= 0_**
Here **_α_** is a small constant like the 0.01 we've taken above.
Graphically, it can be shown as:
![Leaky ReLu function in Python](https://journaldev.nyc3.digitaloceanspaces.com/2020/11/Leaky-ReLu.png "Leaky ReLu")
### The gradient of Leaky ReLu
Let's calculate the gradient for the Leaky ReLu function. The gradient can come out to be:
```python
f'(x) = 1,    x >= 0
      = 0.01, x < 0
```
In this case, the gradient for negative inputs is non-zero. This means that all the neurons will be updated.
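The non-zero negative-side gradient can be confirmed with the same finite-difference idea (a sketch; the 0.01 slope matches the constant used above):

```python
def leaky_relu(x):
    # leaky variant: small linear slope for negative inputs
    return x if x > 0 else 0.01 * x

def numerical_grad(f, x, h=1e-6):
    # central difference approximation of the derivative
    return (f(x + h) - f(x - h)) / (2 * h)

print(numerical_grad(leaky_relu, 5.0))   # ~1.0
print(numerical_grad(leaky_relu, -5.0))  # ~0.01 -> neurons still receive updates
```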
### Implementing Leaky ReLu in Python
The implementation of Leaky ReLu is given below:
```python
def leaky_relu(x):
    if x > 0:
        return x
    else:
        return 0.01 * x
```
Let's try it out on some inputs.
```python
x = 1.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -10.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 0.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 15.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -20.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
```
### Complete Code
The complete code for Leaky ReLu is given below:
```python
def leaky_relu(x):
    if x > 0:
        return x
    else:
        return 0.01 * x
x = 1.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -10.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 0.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = 15.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
x = -20.0
print('Applying Leaky Relu on (%.1f) gives %.1f' % (x, leaky_relu(x)))
```
Output:
```markup
Applying Leaky Relu on (1.0) gives 1.0
Applying Leaky Relu on (-10.0) gives -0.1
Applying Leaky Relu on (0.0) gives 0.0
Applying Leaky Relu on (15.0) gives 15.0
Applying Leaky Relu on (-20.0) gives -0.2
```
## Conclusion
This tutorial was about the ReLu function in Python. We also saw an improved version of it, the Leaky ReLu, which solves the problem of zero gradients for negative values.

Jayant Verma, 2022-08-03

# K-Nearest Neighbors (KNN) in Python

Isha Bansal, 2022-08-03

# Loss Functions in Python - Easy Implementation

Jayant Verma, 2022-08-03

# EDA - Exploratory Data Analysis: Using Python Functions

Prajwal CN, 2022-08-03