The Transformer Positional Encoding Layer in Keras, Part 2


Last Updated on January 6, 2023

In part 1, A Gentle Introduction to Positional Encoding in Transformer Models, we discussed the positional encoding layer of the transformer model. We also showed how you can implement this layer and its functions yourself in Python. In this tutorial, you'll implement the positional encoding layer in Keras and TensorFlow. You can then use this layer in a complete transformer model.

After completing this tutorial, you will know:

  • Text vectorization in Keras
  • Embedding layer in Keras
  • How to subclass the embedding layer and write your own positional encoding layer

Kick-start your project with my book Building Transformer Models with Attention. It provides self-study tutorials with working code to guide you through building a fully working transformer model that can translate sentences from one language to another.

Let’s get started.

The transformer positional encoding layer in Keras, part 2
Photo by Ijaz Rafi. Some rights reserved

Tutorial Overview

This tutorial is split into three parts; they are:

  1. Text vectorization and embedding layer in Keras
  2. Writing your own positional encoding layer in Keras
    1. Randomly initialized and tunable embeddings
    2. Fixed weight embeddings from Attention Is All You Need
  3. Graphical view of the output of the positional encoding layer

The Import Section

First, let's write the section to import all the required libraries:
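The original import block is not reproduced here; the following is a minimal set that covers everything used in the rest of this tutorial:

```python
# A minimal import section covering everything used below
# (not necessarily the author's exact list)
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization, Embedding, Layer
```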

The Text Vectorization Layer

Let's start with a set of English sentences that are already preprocessed and cleaned. The text vectorization layer creates a dictionary of words and replaces each word with its corresponding index in the dictionary. Let's see how to map these two sentences using the text vectorization layer:

  1. I am a robot
  2. you too robot

Note that the text has already been converted to lowercase with all the punctuation marks and noise removed. Next, convert these two sentences to vectors of a fixed length 5. The TextVectorization layer of Keras requires a maximum vocabulary size and the required length of the output sequence for initialization. The output of the layer is a tensor of shape:

(number of sentences, output sequence length)

The following code snippet uses the adapt method to generate a vocabulary. It then creates a vectorized representation of the text.
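A minimal sketch of this step is shown below; the variable names (vectorize_layer, vectorized_text) are illustrative rather than the author's exact code:

```python
# Fixed output length of 5 tokens and a small vocabulary, as described above
output_sequence_length = 5
vocab_size = 10

sentences = [["I am a robot"], ["you too robot"]]
sentence_data = tf.data.Dataset.from_tensor_slices(sentences)

# Create the TextVectorization layer and learn the vocabulary with adapt()
vectorize_layer = TextVectorization(output_sequence_length=output_sequence_length,
                                    max_tokens=vocab_size)
vectorize_layer.adapt(sentence_data)

# Convert the two sentences to sequences of word indices
word_tensors = tf.convert_to_tensor(sentences, dtype=tf.string)
vectorized_text = vectorize_layer(word_tensors)
print("Vocabulary:", vectorize_layer.get_vocabulary())
print("Vectorized text:", vectorized_text)
```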

Want to Get Started With Building Transformer Models with Attention?

Take my free 12-day email crash course now (with sample code).

Click to sign up and also get a free PDF Ebook version of the course.

The Embedding Layer

The Keras Embedding layer converts integers to dense vectors. This layer maps the integers to random numbers, which are later tuned during the training phase. However, you also have the option to set the mapping to some predefined weight values (shown later). To initialize this layer, you need to specify the maximum value of an integer to map, along with the length of the output sequence.
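For example, a word-embedding layer for the vocabulary above might be initialized as follows; the output length of 6 is an illustrative choice:

```python
# Map each of the vocab_size possible word indices to a dense vector of length 6
output_length = 6
word_embedding_layer = Embedding(input_dim=vocab_size, output_dim=output_length)
```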

The Word Embeddings

Let’s see how the layer converts the vectorized_text to tensors.
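A sketch of this step, continuing with the layers defined above:

```python
# Each integer index becomes a dense vector of length output_length; the result
# has shape (number of sentences, sequence length, output_length)
embedded_words = word_embedding_layer(vectorized_text)
print(embedded_words)
```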

The output has been annotated with some comments, as shown below. Note that you will see a different output every time you run this code because the weights are initialized randomly.

Word embeddings. This output will be different every time you run the code because of the random numbers involved.

The Position Embeddings

You also need the embeddings for the corresponding positions. The maximum number of positions corresponds to the output sequence length of the TextVectorization layer.
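A minimal sketch of randomly initialized, tunable position embeddings, added to the word embeddings from the previous step (variable names are illustrative):

```python
# One embedding vector per position 0..output_sequence_length-1
position_embedding_layer = Embedding(input_dim=output_sequence_length,
                                     output_dim=output_length)
position_indices = tf.range(output_sequence_length)   # [0, 1, 2, 3, 4]
embedded_indices = position_embedding_layer(position_indices)

# The output of the positional encoding layer is the sum of the two embeddings;
# broadcasting adds the same position vectors to every sentence
final_output_embedding = embedded_words + embedded_indices
print(final_output_embedding)
```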

Instead of learning these position embeddings, the transformer paper, Attention Is All You Need, uses a fixed sinusoidal scheme, where k is the position, i indexes the embedding dimensions, d is the embedding dimension, and n is a user-defined scalar set to 10,000 in the paper:

\begin{eqnarray}
P(k, 2i) &=& \sin\Big(\frac{k}{n^{2i/d}}\Big)\\
P(k, 2i+1) &=& \cos\Big(\frac{k}{n^{2i/d}}\Big)
\end{eqnarray}
If you want to use the same positional encoding scheme, you can specify your own embedding matrix, as discussed in part 1, which shows how to create your own embeddings in NumPy. When specifying the Embedding layer, you need to provide the positional encoding matrix as weights along with trainable=False. Let's create another positional embedding class that does exactly this.
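The sketch below shows one way such a class might look, reusing the sinusoidal matrix construction from part 1. The class name, the helper method, and the choice to also fix the word-embedding weights with sinusoids are illustrative assumptions, not necessarily the author's exact code:

```python
class PositionEmbeddingFixedWeights(Layer):
    """Word + position embeddings with fixed sinusoidal weights (illustrative sketch)."""

    def __init__(self, sequence_length, vocab_size, output_dim, **kwargs):
        super().__init__(**kwargs)
        word_embedding_matrix = self.get_position_encoding(vocab_size, output_dim)
        position_embedding_matrix = self.get_position_encoding(sequence_length, output_dim)
        self.word_embedding_layer = Embedding(
            input_dim=vocab_size, output_dim=output_dim,
            weights=[word_embedding_matrix], trainable=False)
        self.position_embedding_layer = Embedding(
            input_dim=sequence_length, output_dim=output_dim,
            weights=[position_embedding_matrix], trainable=False)

    def get_position_encoding(self, seq_len, d, n=10000):
        # Sinusoidal matrix as derived in part 1: sin on even columns, cos on odd columns
        P = np.zeros((seq_len, d))
        for k in range(seq_len):
            for i in range(d // 2):
                denominator = np.power(n, 2 * i / d)
                P[k, 2 * i] = np.sin(k / denominator)
                P[k, 2 * i + 1] = np.cos(k / denominator)
        return P

    def call(self, inputs):
        # Add the fixed position encodings to the fixed word encodings
        position_indices = tf.range(tf.shape(inputs)[-1])
        embedded_words = self.word_embedding_layer(inputs)
        embedded_indices = self.position_embedding_layer(position_indices)
        return embedded_words + embedded_indices
```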

Next, we set everything up to run this layer.
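An illustrative way to run the layer on the two vectorized sentences from earlier:

```python
# Apply the fixed-weight layer to the vectorized sentences from the earlier example
fixed_weights_embedding_layer = PositionEmbeddingFixedWeights(
    sequence_length=output_sequence_length,
    vocab_size=vocab_size,
    output_dim=output_length)
fixed_embedded_text = fixed_weights_embedding_layer(vectorized_text)
print(fixed_embedded_text)
```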

Visualizing the Final Embedding

In order to visualize the embeddings, let's take two longer sentences: one technical and the other one just a quote. We'll set up the TextVectorization layer along with the positional encoding layer and see what the final output looks like.
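A sketch of the setup used for the plots below; the exact phrases, vocabulary size, and dimensions are illustrative and may differ from the ones behind the original figures:

```python
technical_phrase = "to understand machine learning algorithms you need to understand " \
                   "concepts such as gradient of a function hessians of a matrix " \
                   "and optimization etc"
wise_phrase = "patrick henry said give me liberty or give me death when he addressed " \
              "the second virginia convention in march"

total_vocabulary = 200
sequence_length = 20
final_output_len = 50

# Vectorize both phrases with a shared vocabulary
phrase_vectorization_layer = TextVectorization(output_sequence_length=sequence_length,
                                               max_tokens=total_vocabulary)
phrase_vectorization_layer.adapt([technical_phrase, wise_phrase])
phrase_tensors = tf.convert_to_tensor([technical_phrase, wise_phrase], dtype=tf.string)
vectorized_phrases = phrase_vectorization_layer(phrase_tensors)

# Randomly initialized word and position embeddings, summed
random_word_embedding = Embedding(input_dim=total_vocabulary, output_dim=final_output_len)
random_position_embedding = Embedding(input_dim=sequence_length, output_dim=final_output_len)
random_embedding_output = (random_word_embedding(vectorized_phrases)
                           + random_position_embedding(tf.range(sequence_length)))

# Fixed sinusoidal weights, using the class defined above
fixed_embedding_layer = PositionEmbeddingFixedWeights(sequence_length=sequence_length,
                                                      vocab_size=total_vocabulary,
                                                      output_dim=final_output_len)
fixed_embedding_output = fixed_embedding_layer(vectorized_phrases)
```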

Now let's see what the random embeddings look like for both phrases.
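A sketch of the plotting code, using Matplotlib's matshow to display one panel per phrase:

```python
# Plot the randomly initialized embeddings, one panel per sentence
fig = plt.figure(figsize=(15, 5))
titles = ["Tech phrase", "Wise phrase"]
for i in range(2):
    ax = plt.subplot(1, 2, 1 + i)
    cax = ax.matshow(random_embedding_output[i].numpy())
    plt.gcf().colorbar(cax)
    plt.title(titles[i], y=1.2)
fig.suptitle("Random embedding")
plt.show()
```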

Random embeddings

The embeddings from the fixed-weights layer are visualized below.
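The same plotting code can be reused, only swapping in the output of the fixed-weight layer:

```python
# Plot the fixed sinusoidal embeddings with the same layout as above
fig = plt.figure(figsize=(15, 5))
for i in range(2):
    ax = plt.subplot(1, 2, 1 + i)
    cax = ax.matshow(fixed_embedding_output[i].numpy())
    plt.gcf().colorbar(cax)
    plt.title(titles[i], y=1.2)
fig.suptitle("Sinusoidal embedding")
plt.show()
```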

Embedding using sinusoidal positional encoding

You can see that the embedding layer initialized with the default parameters outputs random values. On the other hand, the fixed weights generated using sinusoids create a unique signature for every word, with information about each word's position encoded within it.

You can experiment with both tunable and fixed-weight implementations for your particular application.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Building Transformer Models with Attention (my Ebook)

Papers

  • Attention Is All You Need, 2017

Articles

  • A Gentle Introduction to Positional Encoding in Transformer Models, Part 1

Summary

In this tutorial, you discovered the implementation of the positional encoding layer in Keras.

Specifically, you learned:

  • Text vectorization layer in Keras
  • Positional encoding layer in Keras
  • Creating your own class for positional encoding
  • Setting your own weights for the positional encoding layer in Keras

Do you have any questions about positional encoding discussed in this post? Ask your questions in the comments below, and I will do my best to answer.

Learn Transformers and Attention!

Building Transformer Models with Attention

Teach your deep learning model to read a sentence

…using transformer models with attention

Discover how in my new Ebook:
Building Transformer Models with Attention

It provides self-study tutorials with working code to guide you through building a fully working transformer model that can translate sentences from one language to another.

Give magical power of understanding human language for
Your Projects

See What’s Inside




