Honestly, for me, understanding the recursive version of Binary Exponentiation was way easier than understanding its iterative version. And since you are here, I guess it might be the same for you. Anyway, let’s get straight to the point.

You may have noticed it already, as it’s pretty obvious. **Every decimal number can be expressed as the sum of some powers of 2.** For instance, 13 = 1 + 4 + 8 = 2⁰ + 2² + 2³, which indeed is how we convert any binary number to its decimal equivalent in the first place. …
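To make the idea concrete, here’s a minimal recursive sketch (the function name and the modular-arithmetic parameter are my own choices, not necessarily the article’s):

```python
def bin_pow(base, exp, mod):
    """Recursive binary exponentiation: (base ** exp) % mod in O(log exp) steps."""
    if exp == 0:
        return 1 % mod
    half = bin_pow(base, exp // 2, mod)   # reuse the answer for exp // 2
    if exp % 2 == 0:
        return (half * half) % mod        # even exponent: just square it
    return (half * half * base) % mod     # odd exponent: one extra factor of base
```

Because the exponent halves on every call, only about log₂(exp) multiplications are needed instead of exp of them.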

Contest: Educational Codeforces Round 99

This article is about the thinking process and solution of a question from Educational Codeforces Round 99. Here’s the link to the question.

You are given a sequence **A** consisting of **n** integers [a1,a2,…,an] and an integer **x**. Your task is to sort the sequence in non-decreasing order.

For sorting, you are allowed to perform the following operation any number of times (possibly zero): **Choose an integer i such that 1 ≤ i ≤ n and ai > x (i.e., …**
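The operation is cut off above, but in the full statement it swaps ai and x. Under that assumption, a greedy simulation (function name mine) could look like this:

```python
def min_swaps_to_sort(a, x):
    """Greedy sketch: while the array isn't sorted, swap x with the next a[i] > x.
    Returns the number of operations used, or -1 if sorting is impossible."""
    a = list(a)
    ops = 0
    for i in range(len(a)):
        if all(a[j] <= a[j + 1] for j in range(len(a) - 1)):
            return ops                # already non-decreasing: stop early
        if a[i] > x:
            a[i], x = x, a[i]         # the swap can only make x larger
            ops += 1
    return ops if all(a[j] <= a[j + 1] for j in range(len(a) - 1)) else -1
```

The intuition: x only grows after each swap, so swapping at the first out-of-order position is never wasted work.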

High up on the hills,

Stood an old Victorian house

built in the Gothic style.

Deserted, it had been

For over a century now.

Ignorant, so it is

To the changes it is going through.

Glasses all shattered

Yet the panes locked tight.

Brocade curtains clinging

to their fading shine.

Walls hung with faded silks,

Echoing a long monotonous story

of the dynasty that lived.

Dusty old portraits

Swing with the dull song.

The dynasty is long gone,

But the house still stands tall.

Hey, we have got too cozy,

haven’t we?

For your shrills no longer feel like an imposter

in my musical land.

Nor do your punches disturb me off my daily plans,

Mind you, they do hurt! *(And of course, nobody’s asking who provoked you.)* Your ‘artificial’ giggles, though sometimes irritating,

force me to laugh along

dragging me off the roller coaster,

which anyways, would have kicked me off soon.

Almost 19 years have passed,

from when we first got to know each other.

You were a skinny girl,

and so was I.

You were arrogant and I was so modest.

Ask Mom if you doubt,

which you shouldn’t

for how else you reckon

we’ve shared our room for so…

Hemmed in by colossal waves,

is a ship seen never before.

Strong winds are slowly strangling

all the breaths it could ever hold.

Night so ruthless harbors

darkness in its depth.

Trapped souls struggling, though

never had anyone escaped.

But the ocean shows no mercy,

and the moon hides behind the cloud.

Still, the ship refuses to sink.

Still, the footsteps refuse to stop.

K-nearest neighbors (KNN for short) is a robust yet straightforward supervised machine learning algorithm. Though a regression variant also exists, we’ll only discuss its classification form.

KNN is a *non-parametric*, *lazy learning* algorithm. Let’s have a look at what each term means —

**Non-parametric**: Unlike parametric models, non-parametric models are more flexible. They have no preconceived notions about the number of parameters or the functional form of the hypothesis, which saves us from the trouble of making wrong assumptions about the data distribution. …
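To illustrate the “lazy” part — nothing is fit in advance; all the work happens at query time — here’s a tiny sketch (the names and the Euclidean-distance choice are mine):

```python
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    Lazy learning: no training step; distances are computed at prediction time."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)  # squared Euclidean
        for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])          # k closest labels
    return votes.most_common(1)[0][0]
```

Note there are no learned parameters at all: the training data itself *is* the model.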

And with her last breath

She broke all her promises.

Knowing she could never keep

Yet for their sake, she thought she could.

Like a butterfly for whom

life’s all about flowers.

Like clouds above lost

in the world of imagination.

Her dreams were big.

But …

she strangled her dreams,

Chopped her wings.

She accepted a life

she could never live.

Yet for their sake, she thought she could.

With each passing second

She exchanged her breaths for their smile.

All day, all night she stood there

emptying her palette, painting their lives

while her canvas desperately waited

for a stroke from her side.

Caught up in their expectations and her promises

Somewhere she lost who she was.

Her only escape…

In my previous article, I discussed Linear Regression with one variable. There I briefly covered the Linear Regression algorithm, the cost function, and gradient descent. These concepts form the basis for this article, so if you’re not comfortable with them, consider referring to my previous article.

So far we have considered a simple problem, where the output variable depended on just one feature (i.e., x).

E.g., predicting house prices, where the price of a house depends only on its size.

But often, we don’t have such a simple dataset. Most of the time, the output variable depends on more than one…
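With multiple features, the hypothesis becomes h(x) = θ₀ + θ₁x₁ + … + θₙxₙ, and gradient descent updates every θⱼ simultaneously. A minimal sketch (variable names and the learning-rate defaults are mine, not from the article):

```python
def gradient_descent(X, y, alpha=0.1, iters=5000):
    """Batch gradient descent for linear regression with several features.
    X is a list of feature rows; a leading 1 is added for the intercept term."""
    rows = [[1.0] + list(r) for r in X]      # x0 = 1 handles theta_0
    m = len(rows)
    theta = [0.0] * len(rows[0])
    for _ in range(iters):
        # h(x) - y for every training example, using the current theta
        errs = [sum(t * xi for t, xi in zip(theta, r)) - yi
                for r, yi in zip(rows, y)]
        # simultaneous update: theta_j -= alpha/m * sum_i(err_i * x_ij)
        theta = [t - alpha / m * sum(e * r[j] for e, r in zip(errs, rows))
                 for j, t in enumerate(theta)]
    return theta
```

On a toy dataset generated from y = 2x + 3, the returned θ should approach [3, 2].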

Linear Regression is probably the very first learning algorithm that one encounters in their early machine learning journey.

It is one of the oldest and most widely used supervised learning algorithms.

Well, any machine learning problem can be assigned to one of two broad classifications:

Supervised learning and Unsupervised learning

In Supervised learning, we are given a data set and already know what our output should look like. All our algorithm does is figure out a relationship between the input and the output, and use that relationship to predict correct outputs for new test inputs.

Supervised learning problems are further categorized into…

Sieve of Eratosthenes is an algorithm for finding all prime numbers in any given range. Here are the topics that I have tried to cover in this article.

- What are prime numbers?
- Naive way of finding prime numbers.
- Sieve of Eratosthenes (sounds more like a king’s name! **No offence intended.**)
- Segmented Sieve
- Finding prime numbers in a particular range

So if you are interested in any of these topics, then perhaps this article is for you.
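As a quick preview of the third item on the list, the classic sieve fits in a few lines (the article presumably develops something along these lines):

```python
def sieve(limit):
    """Sieve of Eratosthenes: all primes up to `limit`, in O(n log log n) time."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # smaller multiples of p were already crossed out by smaller primes,
            # so we can start crossing out at p * p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]
```

Starting each inner loop at p² (and stopping the outer loop at √limit) is what makes the sieve so much faster than testing each number individually.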

No doubt, every number is beautiful.

But a prime number’s beauty just cannot go unnoticed!

Prime numbers are those positive natural numbers which cannot…