Lagrange Multiplier Approach with Inequality Constraints


Last Updated on March 16, 2023

In a previous post, we introduced the method of Lagrange multipliers to find local minima or local maxima of a function with equality constraints. The same method can be applied to problems with inequality constraints as well.

In this tutorial, you will discover the method of Lagrange multipliers applied to find the local minimum or maximum of a function when inequality constraints are present, optionally together with equality constraints.

After completing this tutorial, you will know:

  • How to find points of local maximum or minimum of a function with inequality constraints
  • Method of Lagrange multipliers with inequality constraints

Let’s get started.

Lagrange Multiplier Approach with Inequality Constraints
Photo by Christine Roy, some rights reserved.

Prerequisites

For this tutorial, we assume that you already have reviewed:

  • Derivatives of functions
  • Functions of several variables, partial derivatives, and gradient vectors
  • A gentle introduction to optimization
  • Gradient descent

as well as

  • A Gentle Introduction To Method Of Lagrange Multipliers

You can review these concepts by clicking on the links above.

Constrained Optimization and Lagrangians

Extending from our previous post, a constrained optimization problem can generally be stated as:

$$
\begin{aligned}
\min && f(X) \\
\textrm{subject to} && g(X) &= 0 \\
&& h(X) &\ge 0 \\
&& k(X) &\le 0
\end{aligned}
$$

where $X$ is a scalar or vector value. Here, $g(X)=0$ is the equality constraint, and $h(X)\ge 0$, $k(X)\le 0$ are inequality constraints. Note that we always use $\ge$ and $\le$ rather than $\gt$ and $\lt$ in optimization problems, because the former define a closed set in mathematics from which we search for the value of $X$. There can be many constraints of each type in an optimization problem.

The equality constraints are easy to handle but the inequality constraints are not. Therefore, one way to make them easier to handle is to convert the inequalities into equalities by introducing slack variables:

$$
\begin{aligned}
\min && f(X) \\
\textrm{subject to} && g(X) &= 0 \\
&& h(X) - s^2 &= 0 \\
&& k(X) + t^2 &= 0
\end{aligned}
$$

When something is negative, adding a certain positive quantity to it will make it equal to zero, and vice versa. That quantity is the slack variable; the $s^2$ and $t^2$ above are examples. We deliberately write them as $s^2$ and $t^2$ to indicate that they must not be negative.

With the slack variables introduced, we can use the Lagrange multiplier method to solve the problem, in which the Lagrangian is defined as:

$$
L(X, \lambda, \theta, \phi) = f(X) - \lambda g(X) - \theta (h(X)-s^2) + \phi (k(X)+t^2)
$$

It is useful to know that, for the optimal solution $X^*$ to the problem, each inequality constraint either holds with equality (in which case the slack variable is zero) or not. The inequality constraints that hold with equality are called the active constraints; the others are the inactive constraints. In this sense, you can consider the equality constraints as always active.

The Complementary Slackness Condition

The reason we need to know whether a constraint is active or not is the Karush-Kuhn-Tucker (KKT) conditions. Precisely, the KKT conditions describe what happens when $X^*$ is the optimal solution to a constrained optimization problem:

  1. The gradient of the Lagrangian function is zero
  2. All constraints are satisfied
  3. The inequality constraints satisfy the complementary slackness condition

The most important of them is the complementary slackness condition. While we learned that an optimization problem with an equality constraint can be solved using a Lagrange multiplier, with the gradient of the Lagrangian being zero at the optimal solution, the complementary slackness condition extends this to the case of inequality constraints by saying that at the optimal solution $X^*$, either the Lagrange multiplier is zero or the corresponding inequality constraint is active.

The use of the complementary slackness condition is to help us explore the different cases in solving the optimization problem. It is best explained with an example.
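Before the worked examples, here is a minimal SymPy sketch of this case-by-case reasoning on a toy problem of my own choosing (not from this post): minimize $(x-2)^2$ subject to $x \ge 1$, rewritten with a slack variable.

```python
# A minimal sketch (assumed toy problem, not from this post): minimize (x-2)^2
# subject to x >= 1, rewritten with a slack variable as x - 1 - s^2 = 0.
import sympy as sp

x, s, theta = sp.symbols("x s theta", real=True)
f = (x - 2) ** 2
L = f - theta * (x - 1 - s**2)   # Lagrangian with slack variable

dL_dx = sp.diff(L, x)            # 2*(x - 2) - theta
dL_dtheta = sp.diff(L, theta)    # -(x - 1 - s**2)

# Case 1: constraint inactive (theta = 0); solve for x and the slack s
inactive = sp.solve([dL_dx.subs(theta, 0), dL_dtheta.subs(theta, 0)], [x, s], dict=True)
print(inactive)   # x = 2 with s^2 = 1 >= 0: feasible, objective value 0

# Case 2: constraint active (s = 0); solve for x and theta
active = sp.solve([dL_dx.subs(s, 0), dL_dtheta.subs(s, 0)], [x, theta], dict=True)
print(active)     # x = 1, theta = -2: objective value 1, worse than case 1
```

Comparing the feasible cases, the minimum is at $x=2$ with the constraint inactive, which is exactly the kind of reasoning carried out by hand in the examples below.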

Example 1: Mean-variance portfolio optimization

This is an example from finance. Suppose we have 1 dollar to split between two different investments, whose returns are modeled as a bivariate Gaussian distribution. How much should we invest in each to minimize the total variance of the return?

This optimization problem, also known as Markowitz mean-variance portfolio optimization, is formulated as:

$$
\begin{aligned}
\min && f(w_1, w_2) &= w_1^2\sigma_1^2+w_2^2\sigma_2^2+2w_1w_2\sigma_{12} \\
\textrm{subject to} && w_1+w_2 &= 1 \\
&& w_1 &\ge 0 \\
&& w_1 &\le 1
\end{aligned}
$$

in which the last two constraints bound the weight of each investment to between 0 and 1 dollar. Let's assume $\sigma_1^2=0.25$, $\sigma_2^2=0.10$, and $\sigma_{12} = 0.15$. Then the Lagrangian function is defined as:

$$
\begin{aligned}
L(w_1,w_2,\lambda,\theta,\phi) =\ & 0.25w_1^2+0.1w_2^2+0.3w_1w_2 \\
&- \lambda(w_1+w_2-1) \\
&- \theta(w_1-s^2) - \phi(w_1-1+t^2)
\end{aligned}
$$

and we have the gradients:

$$
\begin{aligned}
\frac{\partial L}{\partial w_1} &= 0.5w_1+0.3w_2-\lambda-\theta-\phi \\
\frac{\partial L}{\partial w_2} &= 0.2w_2+0.3w_1-\lambda \\
\frac{\partial L}{\partial\lambda} &= 1-w_1-w_2 \\
\frac{\partial L}{\partial\theta} &= s^2-w_1 \\
\frac{\partial L}{\partial\phi} &= 1-w_1-t^2
\end{aligned}
$$

From this point onward, the complementary slackness condition has to be considered. We have two slack variables $s$ and $t$, and the corresponding Lagrange multipliers are $\theta$ and $\phi$. We now have to consider, for each constraint, whether the slack variable is zero (the corresponding inequality constraint is active) or the Lagrange multiplier is zero (the constraint is inactive). There are four possible cases:

  1. $\theta=\phi=0$ and $s^2>0$, $t^2>0$
  2. $\theta\ne 0$ but $\phi=0$, and $s^2=0$, $t^2>0$
  3. $\theta=0$ but $\phi\ne 0$, and $s^2>0$, $t^2=0$
  4. $\theta\ne 0$ and $\phi\ne 0$, and $s^2=t^2=0$

For case 1, using $\partial L/\partial\lambda=0$, $\partial L/\partial w_1=0$, and $\partial L/\partial w_2=0$ we get

$$
\begin{aligned}
w_2 &= 1-w_1 \\
0.5w_1 + 0.3w_2 &= \lambda \\
0.3w_1 + 0.2w_2 &= \lambda
\end{aligned}
$$

from which we get $w_1=-1$, $w_2=2$, $\lambda=0.1$. But with $\partial L/\partial\theta=0$, we would need $s^2=-1$, which has no solution ($s^2$ cannot be negative). Thus this case is infeasible.

For case 2, with $\partial L/\partial\theta=0$ we get $w_1=0$. Hence from $\partial L/\partial\lambda=0$, we know $w_2=1$. With $\partial L/\partial w_2=0$, we find $\lambda=0.2$, and from $\partial L/\partial w_1=0$ we get $\theta=0.1$. In this case, the objective function is 0.1.

For case 3, with $\partial L/\partial\phi=0$ we get $w_1=1$. Hence from $\partial L/\partial\lambda=0$, we know $w_2=0$. With $\partial L/\partial w_2=0$, we get $\lambda=0.3$, and from $\partial L/\partial w_1=0$ we get $\phi=0.2$. In this case, the objective function is 0.25.

For case 4, we get $w_1=0$ from $\partial L/\partial\theta=0$ but $w_1=1$ from $\partial L/\partial\phi=0$. Hence this case is infeasible.

Comparing the objective function from case 2 and case 3, we see that the value from case 2 is lower. Hence it is taken as our solution to the optimization problem, with the optimum attained at $w_1=0$, $w_2=1$.

As an exercise, you can retry the above with $\sigma_{12}=-0.15$. The solution would be 0.0038, attained at $w_1=\frac{5}{13}$, with the two inequality constraints inactive.
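As a sanity check (not part of the original derivation), the same problem can be handed to a numerical solver. Below is a sketch using `scipy.optimize.minimize`; the variable names are my own.

```python
# A numerical sanity check (sketch, not part of the hand-worked derivation) of the
# mean-variance example, using scipy's constrained solver with the same constraints.
import numpy as np
from scipy.optimize import minimize

sigma1_sq, sigma2_sq, sigma12 = 0.25, 0.10, 0.15   # try sigma12 = -0.15 for the exercise

def portfolio_variance(w):
    w1, w2 = w
    return w1**2 * sigma1_sq + w2**2 * sigma2_sq + 2 * w1 * w2 * sigma12

constraints = [{"type": "eq", "fun": lambda w: w[0] + w[1] - 1}]   # w1 + w2 = 1
bounds = [(0.0, 1.0), (0.0, 1.0)]   # 0 <= w1 <= 1 (the bound on w2 is implied)

result = minimize(portfolio_variance, x0=[0.5, 0.5],
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)   # expect approximately w = [0, 1] and f = 0.1
```

The solver should reproduce the case-2 answer above, with the lower bound on $w_1$ active.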

Want to Get Started With Calculus for Machine Learning?

Take my free 7-day email crash course now (with sample code).

Click to sign-up and also get a free PDF Ebook version of the course.

Example 2: Water-filling algorithm

This is an example from communication engineering. If we have a channel (say, a wireless bandwidth) in which the noise power is $N$ and the signal power is $S$, the channel capacity (in bits per second) is proportional to $\log_2(1+S/N)$. If we have $k$ similar channels, each with its own noise and signal level, the total capacity of all channels is the sum $\sum_i \log_2(1+S_i/N_i)$.

Assume we’re using a battery which will give just one watt of vitality and this vitality ought to distribute to the $okay$ channels (denoted as $p_1,cdots,p_k$). Each channel may need utterly completely different attenuation so on the end, the signal vitality is discounted by a purchase $g_i$ for each channel. Then the utmost entire functionality we are going to acquire by using these $okay$ channels is formulated as an optimization draw back

$$
\begin{aligned}
\max && f(p_1,\cdots,p_k) &= \sum_{i=1}^k \log_2\left(1+\frac{g_ip_i}{n_i}\right) \\
\textrm{subject to} && \sum_{i=1}^k p_i &= 1 \\
&& p_1,\cdots,p_k &\ge 0
\end{aligned}
$$

For convenience of differentiation, we note that $\log_2 x=\log x/\log 2$ and $\log(1+g_ip_i/n_i)=\log(n_i+g_ip_i)-\log(n_i)$; hence the objective function can be replaced with

$$
f(p_1,\cdots,p_k) = \sum_{i=1}^k \log(n_i+g_ip_i)
$$

Assume we have $k=3$ channels with noise levels of 1.0, 0.9, 1.0 respectively, and channel gains of 0.9, 0.8, 0.7. Then the optimization problem is

$$
\begin{aligned}
\max && f(p_1,p_2,p_3) &= \log(1+0.9p_1) + \log(0.9+0.8p_2) + \log(1+0.7p_3) \\
\textrm{subject to} && p_1+p_2+p_3 &= 1 \\
&& p_1,p_2,p_3 &\ge 0
\end{aligned}
$$

We have three inequality constraints here. The Lagrangian function is defined as

$$
\begin{aligned}
& L(p_1,p_2,p_3,\lambda,\theta_1,\theta_2,\theta_3) \\
=\ & \log(1+0.9p_1) + \log(0.9+0.8p_2) + \log(1+0.7p_3) \\
& - \lambda(p_1+p_2+p_3-1) \\
& - \theta_1(p_1-s_1^2) - \theta_2(p_2-s_2^2) - \theta_3(p_3-s_3^2)
\end{aligned}
$$

The gradient is therefore

$$
\begin{aligned}
\frac{\partial L}{\partial p_1} & = \frac{0.9}{1+0.9p_1}-\lambda-\theta_1 \\
\frac{\partial L}{\partial p_2} & = \frac{0.8}{0.9+0.8p_2}-\lambda-\theta_2 \\
\frac{\partial L}{\partial p_3} & = \frac{0.7}{1+0.7p_3}-\lambda-\theta_3 \\
\frac{\partial L}{\partial\lambda} &= 1-p_1-p_2-p_3 \\
\frac{\partial L}{\partial\theta_1} &= s_1^2-p_1 \\
\frac{\partial L}{\partial\theta_2} &= s_2^2-p_2 \\
\frac{\partial L}{\partial\theta_3} &= s_3^2-p_3
\end{aligned}
$$

But now we have three slack variables, so we have to consider eight cases:

  1. $\theta_1=\theta_2=\theta_3=0$, hence none of $s_1^2,s_2^2,s_3^2$ is zero
  2. $\theta_1=\theta_2=0$ but $\theta_3\ne 0$, hence only $s_3^2=0$
  3. $\theta_1=\theta_3=0$ but $\theta_2\ne 0$, hence only $s_2^2=0$
  4. $\theta_2=\theta_3=0$ but $\theta_1\ne 0$, hence only $s_1^2=0$
  5. $\theta_1=0$ but $\theta_2,\theta_3$ non-zero, hence only $s_2^2=s_3^2=0$
  6. $\theta_2=0$ but $\theta_1,\theta_3$ non-zero, hence only $s_1^2=s_3^2=0$
  7. $\theta_3=0$ but $\theta_1,\theta_2$ non-zero, hence only $s_1^2=s_2^2=0$
  8. all of $\theta_1,\theta_2,\theta_3$ are non-zero, hence $s_1^2=s_2^2=s_3^2=0$

Immediately we can tell that case 8 is infeasible, since $\partial L/\partial\theta_i=0$ would force $p_1=p_2=p_3=0$, which cannot satisfy $\partial L/\partial\lambda=0$.

For case 1, we have
$$
\frac{0.9}{1+0.9p_1}=\frac{0.8}{0.9+0.8p_2}=\frac{0.7}{1+0.7p_3}=\lambda
$$
from $\partial L/\partial p_1=\partial L/\partial p_2=\partial L/\partial p_3=0$. Together with $p_3=1-p_1-p_2$ from $\partial L/\partial\lambda=0$, we find the solution to be $p_1=0.444$, $p_2=0.430$, $p_3=0.126$, and the objective function $f(p_1,p_2,p_3)=0.639$.
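Rather than solving this system by hand, a quick numerical check (a sketch, not part of the original working; the function name is my own) can be done with `scipy.optimize.fsolve` on the case-1 stationarity equations:

```python
# A sketch (not from the hand derivation): solve the case-1 stationarity
# equations numerically with scipy.optimize.fsolve.
import numpy as np
from scipy.optimize import fsolve

def case1_equations(v):
    p1, p2, p3, lam = v
    return [
        0.9 / (1 + 0.9 * p1) - lam,      # dL/dp1 = 0 with theta_1 = 0
        0.8 / (0.9 + 0.8 * p2) - lam,    # dL/dp2 = 0 with theta_2 = 0
        0.7 / (1 + 0.7 * p3) - lam,      # dL/dp3 = 0 with theta_3 = 0
        1 - p1 - p2 - p3,                # dL/dlambda = 0
    ]

p1, p2, p3, lam = fsolve(case1_equations, x0=[1/3, 1/3, 1/3, 1.0])
objective = np.log(1 + 0.9 * p1) + np.log(0.9 + 0.8 * p2) + np.log(1 + 0.7 * p3)
print(p1, p2, p3, objective)   # approximately 0.444, 0.430, 0.126 and 0.639
```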

For case 2, we have $p_3=0$ from $\partial L/\partial\theta_3=0$. Further, using $p_2=1-p_1$ from $\partial L/\partial\lambda=0$, and
$$
\frac{0.9}{1+0.9p_1}=\frac{0.8}{0.9+0.8p_2}=\lambda
$$
from $\partial L/\partial p_1=\partial L/\partial p_2=0$, we can solve for $p_1=0.507$ and $p_2=0.493$. The objective function is $f(p_1,p_2,p_3)=0.634$.

Similarly in case 3, $p_2=0$, and we solve for $p_1=0.659$ and $p_3=0.341$, with the objective function $f(p_1,p_2,p_3)=0.574$.

In case 4, we have $p_1=0$, $p_2=0.652$, $p_3=0.348$, and the objective function $f(p_1,p_2,p_3)=0.570$.

In case 5, we have $p_2=p_3=0$ and hence $p_1=1$. Thus the objective function is $f(p_1,p_2,p_3)=0.536$.

Similarly in case 6 and case 7, we have $p_2=1$ and $p_3=1$ respectively. The objective function attains 0.531 and 0.425 respectively.

Comparing all these cases, we find that the maximum value of the objective function is attained in case 1. Hence the solution to this optimization problem is
$p_1=0.444$, $p_2=0.430$, $p_3=0.126$, with $f(p_1,p_2,p_3)=0.639$.
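As with the first example, this answer can also be verified end to end with a numerical solver. The sketch below (not part of the original working; function names are my own) maximizes the transformed objective directly under the simplex constraint:

```python
# A sketch (not from the hand derivation): verify the water-filling answer by
# maximizing the transformed objective directly (i.e., minimizing its negative).
import numpy as np
from scipy.optimize import minimize

def neg_capacity(p):
    return -(np.log(1 + 0.9 * p[0]) + np.log(0.9 + 0.8 * p[1]) + np.log(1 + 0.7 * p[2]))

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1}]   # p1 + p2 + p3 = 1
bounds = [(0.0, None)] * 3                                       # p_i >= 0

result = minimize(neg_capacity, x0=[1/3, 1/3, 1/3],
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)   # approximately [0.444, 0.430, 0.126] and 0.639
```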

Extensions and Further Reading

While in the examples above we introduced slack variables into the Lagrangian function, some books prefer not to add slack variables but instead to restrict the Lagrange multipliers for inequality constraints to be positive. In that case you may see the Lagrangian function written as

$$
L(X, \lambda, \theta, \phi) = f(X) - \lambda g(X) - \theta h(X) + \phi k(X)
$$

but with the requirement that $\theta\ge 0$ and $\phi\ge 0$.

The Lagrangian function is also useful in the primal-dual approach to finding the maximum or minimum. This is especially helpful when the objective or constraints are nonlinear, so that the solution cannot easily be found in closed form.

Some books that cover this topic are:

Summary

In this tutorial, you discovered how the method of Lagrange multipliers can be applied to inequality constraints. Specifically, you learned:

  • Lagrange multipliers and the Lagrangian function in the presence of inequality constraints
  • How to use the KKT conditions to solve an optimization problem when inequality constraints are given

Get a Handle on Calculus for Machine Learning!

Calculus For Machine Learning

Feel Smarter with Calculus Concepts

…by getting a better sense of the calculus symbols and terms

Discover how in my new Ebook:
Calculus for Machine Learning

It provides self-study tutorials with full working code on:
differentiation, gradient, Lagrange multiplier approach, Jacobian matrix,
and much more…

Bring Just Enough Calculus Knowledge to
Your Machine Learning Projects

See What’s Inside




