Commit

updated
NIXBLACK11 committed Dec 7, 2023
1 parent 9d616e1 commit da22645
Showing 1 changed file with 114 additions and 2 deletions.
116 changes: 114 additions & 2 deletions tasks/SentimentAnalysis/SentimentAnalysis.ipynb
@@ -935,13 +935,125 @@
"id": "SOxFqEdwcejj"
},
"source": [
"## Step 14: Sentiment Prediction for Multilingual Texts\n",
"\n",
"This step iterates through a collection of texts that express the same sentiment in different languages: English, Hindi, Portuguese, Romanian, Slovenian, Chinese, French, Dutch, Russian, Italian, and Bosnian.\n",
"\n",
"It demonstrates the model's ability to analyze sentiment across diverse linguistic contexts and still yield the same prediction."
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "vjFvWEC0UOj0",
"outputId": "df2356ca-9fc7-46a2-ad85-9dde8f5d7f2b"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"English: Said something harsh and didn't even realize it's harsh until I said it.. Sorry\n",
"1/1 [==============================] - 0s 19ms/step\n",
"Predicted Sentiment: negative\n",
"Hindi: कुछ कड़ाई बातें कहीं और मैंने यह तक महसूस नहीं किया कि यह कड़ाई है जब तक मैंने यह कहा.. माफ़ करें\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"100%|██████████| 608M/608M [00:12<00:00, 48.9MB/s]\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"1/1 [==============================] - 0s 32ms/step\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.10/dist-packages/fairseq/models/transformer/transformer_encoder.py:281: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. (Triggered internally at ../aten/src/ATen/NestedTensorImpl.cpp:178.)\n",
" x = torch._nested_tensor_from_mask(\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Predicted Sentiment: negative\n",
"Portuguese: Disse algo duro e nem percebi que era duro até dizer.. Desculpe\n",
"1/1 [==============================] - 0s 30ms/step\n",
"Predicted Sentiment: negative\n",
"Romanian: Am spus ceva dur și nici măcar nu mi-am dat seama că e dur până când am spus asta.. Scuze\n",
"1/1 [==============================] - 0s 22ms/step\n",
"Predicted Sentiment: negative\n",
"Slovenian: Rekel sem nekaj ostrega in sploh nisem ugotovil, da je ostro, dokler nisem rekel.. Oprosti\n",
"1/1 [==============================] - 0s 20ms/step\n",
"Predicted Sentiment: negative\n",
"Chinese: 说了一些刻薄的话,甚至直到我说出来我才意识到它很刻薄.. 抱歉\n",
"1/1 [==============================] - 0s 26ms/step\n",
"Predicted Sentiment: negative\n",
"French: Ai dit quelque chose de dur et je n'ai même pas réalisé que c'était dur jusqu'à ce que je le dise.. Désolé\n",
"1/1 [==============================] - 0s 21ms/step\n",
"Predicted Sentiment: negative\n",
"Dutch: Iets hards gezegd en realiseerde me niet eens dat het hard was tot ik het zei.. Sorry\n",
"1/1 [==============================] - 0s 18ms/step\n",
"Predicted Sentiment: negative\n",
"Russian: Сказал что-то резкое и даже не осознал, насколько это резкое, пока не сказал.. Извините\n",
"1/1 [==============================] - 0s 19ms/step\n",
"Predicted Sentiment: negative\n",
"Italian: Ho detto qualcosa di duro e non me ne sono nemmeno reso conto finché non l'ho detto.. Scusa\n",
"1/1 [==============================] - 0s 27ms/step\n",
"Predicted Sentiment: negative\n",
"Bosnian: Rekao nešto oštro i čak nisam shvatio da je oštro dok nisam rekao.. Žao mi je\n",
"1/1 [==============================] - 0s 19ms/step\n",
"Predicted Sentiment: negative\n"
]
}
],
"source": [
"sentiments = {\n",
" \"english\": \"Said something harsh and didn't even realize it's harsh until I said it.. Sorry\",\n",
" 'hindi': \"कुछ कड़ाई बातें कहीं और मैंने यह तक महसूस नहीं किया कि यह कड़ाई है जब तक मैंने यह कहा.. माफ़ करें\",\n",
" 'portuguese': \"Disse algo duro e nem percebi que era duro até dizer.. Desculpe\",\n",
" 'romanian': \"Am spus ceva dur și nici măcar nu mi-am dat seama că e dur până când am spus asta.. Scuze\",\n",
" 'slovenian': \"Rekel sem nekaj ostrega in sploh nisem ugotovil, da je ostro, dokler nisem rekel.. Oprosti\",\n",
" 'chinese': \"说了一些刻薄的话,甚至直到我说出来我才意识到它很刻薄.. 抱歉\",\n",
" 'french': \"Ai dit quelque chose de dur et je n'ai même pas réalisé que c'était dur jusqu'à ce que je le dise.. Désolé\",\n",
" 'dutch': \"Iets hards gezegd en realiseerde me niet eens dat het hard was tot ik het zei.. Sorry\",\n",
" 'russian': \"Сказал что-то резкое и даже не осознал, насколько это резкое, пока не сказал.. Извините\",\n",
" 'italian': \"Ho detto qualcosa di duro e non me ne sono nemmeno reso conto finché non l'ho detto.. Scusa\",\n",
" 'bosnian': \"Rekao nešto oštro i čak nisam shvatio da je oštro dok nisam rekao.. Žao mi je\"\n",
"}\n",
"\n",
"# Iterate over the languages and predict the sentiment of each text\n",
"for language, sentiment in sentiments.items():\n",
"    print(f\"{language.capitalize()}: {sentiment}\")\n",
"    encoder = LaserEncoderPipeline(lang=language)\n",
"    # Embed the text with LASER, then classify it with the trained model\n",
"    text_embedding = encoder.encode_sentences([sentiment])[0]\n",
"    text_embedding = np.reshape(text_embedding, (1, -1))\n",
"\n",
"    predicted_class = np.argmax(model.predict(text_embedding))\n",
"    predicted_label = label_encoder.inverse_transform([predicted_class])[0]\n",
"    # Map the encoded label to readable text (0 = negative, 1 = positive)\n",
"    predicted_sentiment_label = 'negative' if predicted_label == 0 else 'positive'\n",
"\n",
"    print(f\"Predicted Sentiment: {predicted_sentiment_label}\")"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -967,4 +1079,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
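For readers who want to experiment with the Step 14 loop outside the notebook, its control flow can be sketched as a standalone script. The `LaserEncoderPipeline`, trained `model`, and `label_encoder` come from earlier notebook cells and require downloaded weights, so this sketch replaces them with hypothetical stand-ins (`FakeEncoder`, `FakeModel`, `LABELS`); only the loop structure and label mapping mirror the notebook.

```python
import numpy as np

# Hypothetical stand-in for the LASER encoder: LASER produces one
# 1024-dimensional vector per input sentence.
class FakeEncoder:
    def encode_sentences(self, sentences):
        return np.zeros((len(sentences), 1024))

# Hypothetical stand-in for the trained Keras classifier: always scores
# class 0 ("negative") highest, mimicking the notebook's output above.
class FakeModel:
    def predict(self, x):
        return np.array([[0.9, 0.1]])

LABELS = {0: "negative", 1: "positive"}

sentiments = {
    "english": "Said something harsh and didn't even realize it's harsh until I said it.. Sorry",
    "french": "Ai dit quelque chose de dur et je n'ai même pas réalisé que c'était dur jusqu'à ce que je le dise.. Désolé",
}

results = {}
for language, text in sentiments.items():
    encoder = FakeEncoder()  # in the notebook: LaserEncoderPipeline(lang=language)
    embedding = encoder.encode_sentences([text])[0].reshape(1, -1)
    predicted_class = int(np.argmax(FakeModel().predict(embedding)))
    results[language] = LABELS.get(predicted_class, "unknown")
    print(f"{language.capitalize()}: {results[language]}")
```

The `LABELS.get(..., "unknown")` fallback avoids the undefined-variable error the notebook's original `if`/`elif` could hit if the classifier ever returned a class outside {0, 1}.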
