
*Main Queue*: All of the main podcasts I listen to.

*Global Queue*: All podcasts that either I can’t listen to at this time (spoilers, typically) or a series whose back catalog I am going through.

This is a very nice scheme, until you look at it in my podcast player.

The playlist I listen to 99% of the time is at the bottom — and this frustrated me. *A lot.*

Fortunately, I found a way to rearrange alphabetically ordered lists: *paste a leading zero-width space in front of items that you want lower in the list*.

The zero-width space sorts after alphabetical letters, so the item naturally falls further down the list. If more than two items need reordering, keep adding leading zero-width spaces to push an item further down.
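Most players presumably sort these names by raw code point, which is why the trick works; a quick Python sketch (the playlist names here are just examples):

```python
ZWSP = "\u200b"  # zero-width space, U+200B

# In a plain code-point sort, U+200B comes after every ASCII letter,
# so a ZWSP-prefixed name sinks below unprefixed ones.
playlists = [ZWSP + "Global Queue", "Main Queue"]
print(sorted(playlists))  # ['Main Queue', '\u200bGlobal Queue']
```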

An easy way to get a zero-width space onto the clipboard can be found here.

Sanity restored.


$$e = 2.71828182845$$

There are also quite a few ways of deriving Euler’s Number. There’s the Taylor expansion method:

$$e = \sum _{k=0} ^{\infty} \frac{1}{k!}$$

There is also the classical limit:

$$e = \lim_{n \rightarrow \infty} \left( 1 + \frac{1}{n} \right)^n$$

Then there is a unique way of calculating $e$. Let $R$ be a random number generated between $[0, 1]$, inclusive. Then $e$ is the average number of $R$s it takes for the sum to exceed $1$. In other words, keep adding random numbers with bounds $[0, 1]$ together until the sum exceeds one, average a bunch of attempts together, and you should get $e$.

With more rigor, for independent uniform $(0, 1)$ random variables $R_1$, $R_2$, $\ldots$, $R_n$,

$$N = \min \left\{ n : \sum_{i=1} ^{n} R_i > 1 \right\}$$

then we can calculate $e$ as such:

$$e = \text{Average}(N)$$
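The key step behind this (stated loosely): the probability that $n$ uniform draws sum to at most $1$ is the volume of the corner simplex, $\frac{1}{n!}$, so

$$\mathbb{E}[N] = \sum_{n=0}^{\infty} P(N > n) = \sum_{n=0}^{\infty} P\left( \sum_{i=1}^{n} R_i \leq 1 \right) = \sum_{n=0}^{\infty} \frac{1}{n!} = e$$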

The proof can be found here, but it is pretty involved. Instead, an easier method is to write a program and verify for large enough $n$.

For $n = 1,000,000,000$^{1}, we have the following results:

| $e$ | Sum Solution | Limit Solution | Random Uniform Variable |
|---|---|---|---|
| 2.7182818284 | 2.7182818284 | 2.7182820520 | 2.718250315 |

Feel free to check out the code below, or skim all of the data to see the asymptotic approach towards $e$.

| n | Sum Solution | Limit Solution | Random Uniform Variable |
|---|---|---|---|
| 1 | 2 | 2 | 2 |
| 10 | 2.7182818011 | 2.5937424601 | 2.5 |
| 100 | 2.7182818284 | 2.7048138294 | 2.69 |
| 1000 | 2.7182818284 | 2.7169239322 | 2.717 |
| 10000 | | 2.7181459268 | 2.7242 |
| 100000 | | 2.7182682371 | 2.71643 |
| 1000000 | | 2.7182804690 | 2.71961 |
| 10000000 | | 2.7182816941 | 2.7182017 |
| 100000000 | | 2.7182817983 | 2.71818689 |
| 1000000000 | | 2.7182820520 | 2.718250315 |

```
import random
import math
from decimal import Decimal


def e_sum(upper_bound):
    # Taylor series: e = sum over k of 1/k!, computed recursively
    if upper_bound < 0:
        return Decimal(0)
    return Decimal(1) / Decimal(math.factorial(upper_bound)) + e_sum(upper_bound - 1)


def e_limit(n):
    # Classical limit: e = lim (1 + 1/n)^n
    return Decimal((1 + 1.0 / float(n))**n)


def find_greater_than_one(value=0, attempts=0):
    # Count how many uniform [0, 1] draws it takes for the sum to exceed one
    if value <= 1:
        return find_greater_than_one(value + random.uniform(0, 1), attempts + 1)
    return attempts
```
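For the random-uniform column, that helper just needs to be averaged over many trials; here is an iterative sketch of the same idea (avoiding the recursion-depth issue the footnote mentions), with the trial count chosen arbitrarily:

```python
import random


def draws_to_exceed_one():
    # Count how many uniform [0, 1] draws it takes for the running sum to pass 1
    total, attempts = 0.0, 0
    while total <= 1:
        total += random.uniform(0, 1)
        attempts += 1
    return attempts


def estimate_e(trials=100000):
    # e is the expected number of draws, so average over many trials
    return sum(draws_to_exceed_one() for _ in range(trials)) / trials


print(estimate_e())  # hovers around 2.718
```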

For all except the sum calculation: because it is a summation with a factorial inside, it takes a long time to compute (and, if done recursively, overflows the stack). ↩


The problem was House of Cards, and it went like this (skip to the TL;DR at the bottom):

Brian and Susan are old friends, and they always dare each other to do reckless things. Recently Brian had the audacity to take the bottom right exit out of their annual maze race, instead of the usual top left one. In order to trump this, Susan needs to think big. She will build a house of cards so big that, should it topple over, the entire country would be buried by cards. It’s going to be huge! The house will have a triangular shape. The illustration to the right shows a house of height $6$ and Figure 1 shows a schematic figure of a house of height $5$.

Figure 1

For aesthetic reasons, the cards used to build the tower should feature each of the four suits (clubs, diamonds, hearts, spades) equally often. Depending on the height of the tower, this may or may not be possible. Given a lower bound $h_0$ on the height of the tower, what is the smallest possible height $h \geq h_0$ such that it is possible to build the tower?

TL;DR: Using Figure 1 as a reference, you are given a lower bound on the height of a tower of cards. However, there must be **an equal distribution** of all four suits: clubs, diamonds, hearts, and spades.

This implies that the number of cards has to be divisible by $4$. Seeing as the input was huge ($1 \leq h_0 \leq 10^{1000}$), there was no brute-forcing this. So, first thought: turn this into a closed-form series, and solve the series.

Getting the values for the first five heights, I got the following set:

$$\{2, 7, 15, 26, 40, \ldots\}$$

I was able to turn this set into a series quite easily:

$$\sum_{n = 1} ^{h_0} \left(3n - 1\right)$$

This turned into the following equation:

$$\frac{1}{2} h_0(3h_0 + 1)$$
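For completeness, the closed form follows from the standard sum $\sum_{n=1}^{h_0} n = \frac{h_0(h_0+1)}{2}$:

$$\sum_{n = 1} ^{h_0} \left(3n - 1\right) = 3 \cdot \frac{h_0 (h_0 + 1)}{2} - h_0 = \frac{1}{2} h_0 (3 h_0 + 1)$$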

So, all I had to do was plug $h_0$ into the equation and increment while the number was not divisible by $4$. Then I realized how large the input really was. The input size ($1 \cdot 10^{1000}$) is orders of magnitude larger than typical large data types allow (about $1.84 \cdot 10^{19}$ for an unsigned 64-bit integer).

I realized this couldn’t be tested against an intensive data set, because there is only one number to calculate. I thought that, since the series always subtracts one, the number of times I would have to increment should be roughly four at most. Keeping this in mind, I decided to use Python. Python can work with arbitrarily large numbers, making it ideal in this situation.
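To convince myself the loop always terminates quickly, note that $\frac{1}{2} h (3h + 1) \bmod 4$ depends only on $h \bmod 8$, so every window of eight consecutive heights contains a valid one; a quick check (not part of the submitted solution):

```python
def cards(h):
    # closed form for the number of cards in a tower of height h
    return (3 * h * h + h) // 2


# Residues mod 4 cycle with period 8, and each cycle contains a 0,
# so at most a handful of increments is ever needed.
residues = [cards(h) % 4 for h in range(1, 9)]
print(residues)  # [2, 3, 3, 2, 0, 1, 1, 0]
```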

I sat down, hoped for the best, and wrote the following code.

```
def getNumberOfCards(x):
    return (3*pow(x, 2) + x) // 2


height = int(input())

while getNumberOfCards(height) % 4 != 0:
    height += 1

print(height)
```

With a run time of 0.02 seconds, it worked.


I could not get the updates page to open. I killed the App Store. Restarted the computer. Everything.

After doing some research, I discovered there is a way to update software from the terminal: `softwareupdate`. So, after running one command (`sudo softwareupdate -iv`), I am writing this from the latest version of macOS Sierra.

- C/C++
- Python
- Lua
- LaTeX
- Perl

However, in the last few months I started using Vim. *Heavily*. So much so that I was trying to use Vim commands in the CodeRunner buffers. So I decided I wanted to have the functionality, and in vim-esque fashion, I mapped it to my leader key: `<leader>r`. The mnemonic `<leader>r`un helped me remember the command on the first few tries.

To get the functionality, just add the following to your `.vimrc`.

```
function! MakeIfAvailable()
    if filereadable("./makefile")
        make
    elseif (&filetype == "cpp")
        " Vimscript concatenates strings with '.', not '+'
        execute("!clang++ -std=c++14 " . bufname("%"))
        execute("!./a.out")
    elseif (&filetype == "c")
        execute("!clang -std=c11 " . bufname("%"))
        execute("!./a.out")
    elseif (&filetype == "tex")
        execute("!xelatex " . bufname("%"))
        execute("!open " . expand("%:r") . ".pdf")
    endif
endfunction

" a dedicated group, so it doesn't clobber other augroups on re-source
augroup runmappings
    autocmd!
    autocmd FileType c nnoremap <leader>r :call MakeIfAvailable()<cr>
    autocmd FileType cpp nnoremap <leader>r :call MakeIfAvailable()<cr>
    autocmd FileType tex nnoremap <leader>r :call MakeIfAvailable()<cr>
    autocmd FileType python nnoremap <leader>r :exec '!python' shellescape(@%, 1)<cr>
    autocmd FileType perl nnoremap <leader>r :exec '!perl' shellescape(@%, 1)<cr>
    autocmd FileType sh nnoremap <leader>r :exec '!bash' shellescape(@%, 1)<cr>
    autocmd FileType swift nnoremap <leader>r :exec '!swift' shellescape(@%, 1)<cr>
    nnoremap <leader>R :!<Up><CR>
augroup END
```

So, skipping the implementation details (you can read about them below), I wrote a script that parsed *165,607 posts* to get a general idea of what “r/all” does in a given day. **Note: all times are in the Pacific Time (PT) zone.**

Let’s start with number of posts created at any given time.

So a pretty good trend. What about the mean upvote count?

Okay, not so pretty. Let’s filter out the outliers by replacing anything above 300 upvotes with a -1.

A decent trend until around 12:00 to 14:00, where everything goes sporadic. Let’s take a look at the median upvotes.

Again, let’s filter the outliers (anything above 1000).

So we can safely assume a lot of Reddit posts’ upvote counts are zero (in fact, the mode — the most commonly occurring upvote count — is by far zero).

But what about the max?

So far, these are pretty useless. But, there is one good litmus test for popularity: reaching over 1,000 upvotes. Let's take a look at that.

It looks like there could be something here. Let's switch to a line graph and only sum to the hour.

Now that's a trend. Adding a cubic spline, we get a beautiful line graph:

*It seems quite apparent that the best time to post is around 14:00 to 15:00*. Neat.
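The hour-level binning behind that last graph amounts to counting, per hour, the posts that cross the 1,000-upvote mark; a sketch with made-up `(hour, upvotes)` pairs (the real values come from the script below):

```python
from collections import Counter

# Hypothetical (hour_posted, upvotes) samples; the real data comes from praw
posts = [(14, 2400), (14, 12), (15, 1800), (3, 950), (14, 1100), (22, 5000)]

# Count, per hour, how many posts break 1,000 upvotes
popular_per_hour = Counter(hour for hour, score in posts if score > 1000)
print(popular_per_hour.most_common())  # [(14, 2), (15, 1), (22, 1)]
```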

I used the `praw` Python package to download and parse all of the data. Unfortunately, Reddit was having issues handling over 100,000 consecutive requests (around 100/second), so my connection would be severed constantly. I had to manually restart where I left off.

The code is as follows:

```
import praw
import datetime
from functools import reduce
import statistics
import time


def get_date(submission):
    time = submission.created
    return datetime.datetime.fromtimestamp(time)


def date_to_unix_second(t):
    return (t - datetime.datetime(1970, 1, 1)).total_seconds()


# Thanks https://stackoverflow.com/questions/10797819/finding-the-mode-of-a-list
def mode(numbers, out_mode):
    counts = {k: numbers.count(k) for k in set(numbers)}
    modes = sorted(dict(filter(lambda x: x[1] == max(counts.values()), counts.items())).keys())

    if out_mode == 'smallest':
        return modes[0]
    elif out_mode == 'largest':
        return modes[-1]
    else:
        return modes


reddit = praw.Reddit(user_agent='bot1', client_id='', client_secret='', redirect_uri='' 'authorize_callback')
subreddit = reddit.subreddit('all')

day1 = datetime.datetime(2017, 5, 26, hour=0, minute=0, second=0)
day2 = datetime.datetime(2017, 5, 27, hour=0, minute=0, second=0)
current_day = day2

while current_day != day1:
    current_minute_upvotes = []
    current_minute = 0
    total = 0

    for submission in subreddit.submissions(date_to_unix_second(day1), date_to_unix_second(current_day) - 60*60):
        submission_date = get_date(submission)

        if submission_date.minute != current_minute:
            current_minute = submission_date.minute

            # One CSV row per minute: HH:MM, count, mean, median, mode, max, min
            print("{0}:{1},{2},{3},{4},{5},{6},{7}".format(
                submission_date.hour if submission_date.hour > 9 else "0" + str(submission_date.hour),
                submission_date.minute if submission_date.minute > 9 else "0" + str(submission_date.minute),
                len(current_minute_upvotes),
                0 if not current_minute_upvotes else reduce(lambda x, y: x + y, current_minute_upvotes, 0) / len(current_minute_upvotes),
                0 if not current_minute_upvotes else statistics.median(current_minute_upvotes),
                0 if not current_minute_upvotes else mode(current_minute_upvotes, "largest"),
                0 if not current_minute_upvotes else max(current_minute_upvotes),
                0 if not current_minute_upvotes else min(current_minute_upvotes),
            ))

            total += len(current_minute_upvotes)
            current_minute_upvotes = []

        current_minute_upvotes += [submission.score]
        current_day = submission_date

    print(total)
```

For the spline, the code is as follows:

```
# `spline` comes from SciPy (scipy.interpolate.spline, removed in newer versions)
from scipy.interpolate import spline

delta_t = max(x) - min(x)
N_points = 300

xnew = [min(x) + i*delta_t/N_points for i in range(N_points)]

x_ts = [x_.timestamp() for x_ in x]
xnew_ts = [x_.timestamp() for x_ in xnew]

ynew = spline(x_ts, y, xnew_ts)
```

The graphs were produced with matplotlib; the code is below.

```
#!/usr/local/bin/python3
#
# plot.py
#
# Created by Illya Starikov on 05/16/17.
# Copyright 2017. Illya Starikov. All rights reserved.
#

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import csv


def import_from_csv(filename):
    with open(filename, "rt") as csvfile:
        csv_parsor = csv.reader(csvfile, delimiter=',')

        date_times = []
        values = []

        for date_and_time_string, value in csv_parsor:
            datetime_object = datetime.strptime(date_and_time_string, '%H:%M')
            date_times += [datetime_object.replace(day=26, month=5, year=2017)]
            values += [value]

        return (date_times, values)


def main():
    # Get the data and the subplots
    x, y = import_from_csv('data.csv')
    fig, ax = plt.subplots()

    # Make the figure wide and draw it
    fig.set_size_inches((12, 5))
    ax.plot_date(x, y, color='b', marker='+', markersize=6)

    # Hourly tick labels and locations
    labels = ["{}:00".format(hour) for hour in range(1, 24)]
    locs = [mdates.date2num(datetime(2017, 5, 26, hour, 0)) for hour in range(1, 24)]

    # Assign the data, change fonts to Bebas Neue, make titles and labels
    ax.set_xticklabels(labels, fontname="Bebas Neue")
    ax.set_yticklabels([int(i) for i in ax.get_yticks()], fontname="Bebas Neue")
    ax.set_title("Title", fontsize=22, fontname="Bebas Neue")
    ax.set_ylabel("Upvotes", fontsize=14, fontname="Bebas Neue")
    ax.set_xlabel("Time of Day", fontsize=14, fontname="Bebas Neue")
    ax.set_xticks(locs)

    # Make nicer x axis
    plt.gcf().autofmt_xdate()

    # Export that shit
    ax.grid()
    plt.draw()
    plt.savefig("figure.png", format='png', dpi=280)


if __name__ == "__main__":
    main()
```


- It can screw up string literals
- It can break expectations in a text editor (i.e. jumping to a new line or the end of the line)
- It can actually break programming languages
- It is just unflattering

However, in Vim, it takes one `autocmd` to alleviate this.

```
augroup spaces
    autocmd!
    autocmd BufWritePre * %s/\s\+$//e
augroup END
```

On every buffer save, substitute whitespace at the end of each line with nothing. Easy!
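The pattern is the usual "one or more whitespace characters anchored at the end of the line"; the same idea expressed in Python's regex dialect (just an illustration — Vim does its own matching):

```python
import re

line = "some code\t   "
# Vim's \s\+$ corresponds to \s+$ here: trailing whitespace, replaced with nothing
cleaned = re.sub(r"\s+$", "", line)
print(repr(cleaned))  # 'some code'
```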


Here are the results (*click to enlarge*).

There were five major milestones to bring out.

**Start Exam**: Well, I start the exam.

**Through Exam**: I don't generally do entire problems at once, but find a good stopping point, move on, and come back to double-check everything.

**Finish Exam**: I finished all the problems.

**First Review**: I generally review twice, and this is where I finished the first review.

**Leave Exam**: I finished the second review and I'm going back to bed.

The interesting parts were the spikes; those were generally problems I was either stumped on or had hoped would not be on the exam.

The graph was produced via matplotlib. You can find the source code below.

```
#!/usr/local/bin/python3
#
# plot.py
#
# Created by Illya Starikov on 05/16/17.
# Copyright 2017. Illya Starikov. All rights reserved.
#

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import csv


def import_from_csv(filename):
    with open(filename, "rt") as csvfile:
        csv_parsor = csv.reader(csvfile, delimiter=',')

        date_times = []
        values = []

        for date_and_time_string, value in csv_parsor:
            datetime_object = datetime.strptime(date_and_time_string, '%Y-%m-%d %H:%M:%S')
            date_times += [datetime_object]
            values += [value]

        return (date_times, values)


def main():
    # Get the data and the subplots
    x, y = import_from_csv('data.csv')
    fig, ax = plt.subplots()

    # Make the figure wide and draw it
    fig.set_size_inches((12, 5))
    ax.plot_date(x, y, color='r', marker='x', markersize=6)

    # Custom labels, since I have milestones
    labels = ["Start Exam", "", "10:15", "", "Through Exam", "10:45", "11:00", "Finish Exam", "", "11:30", "First Review", "Leave Exam", "12:00", "12:15"]
    locs = [
        mdates.date2num(datetime(2017, 5, 10, 10, 00)),
        mdates.date2num(datetime(2017, 5, 10, 10, 00)),
        mdates.date2num(datetime(2017, 5, 10, 10, 15)),
        mdates.date2num(datetime(2017, 5, 10, 10, 30)),
        mdates.date2num(datetime(2017, 5, 10, 10, 30)),
        mdates.date2num(datetime(2017, 5, 10, 10, 45)),
        mdates.date2num(datetime(2017, 5, 10, 11, 00)),
        mdates.date2num(datetime(2017, 5, 10, 11, 13)),
        mdates.date2num(datetime(2017, 5, 10, 11, 15)),
        mdates.date2num(datetime(2017, 5, 10, 11, 30)),
        mdates.date2num(datetime(2017, 5, 10, 11, 35)),
        mdates.date2num(datetime(2017, 5, 10, 11, 45)),
        mdates.date2num(datetime(2017, 5, 10, 11, 48)),
        mdates.date2num(datetime(2017, 5, 10, 12, 00)),
    ]

    # Assign the data, change fonts to Bebas Neue, make titles and labels
    ax.set_xticklabels(labels, fontname="Bebas Neue")
    ax.set_yticklabels([int(i) for i in ax.get_yticks()], fontname="Bebas Neue")
    ax.set_xticks(locs)
    ax.set_title("Heart Rate vs. DiffEq Exam", fontsize=22, fontname="Bebas Neue")
    ax.set_ylabel("Heart Rate (BPM)", fontsize=14, fontname="Bebas Neue")
    ax.set_xlabel("Time", fontsize=14, fontname="Bebas Neue")

    # Make nicer x axis
    plt.gcf().autofmt_xdate()

    # Export that shit
    ax.grid()
    plt.draw()
    plt.savefig("figure.png", format='png', dpi=280)


if __name__ == "__main__":
    main()
```

`auto`, lambda expressions, `constexpr`s, and the `default` and `delete` keywords. My lecture notes can be found here (alternatively, under projects).

`eqnarray`. However, I feel as if the book didn’t convey the ideas as well as it should; also, its suggestion was `IEEEeqnarray`, as opposed to the classically recommended `align`. This article from the TeX Users Group conveys the ideas more eloquently and provides better alternatives.

`/dev/null`; except then you wouldn’t be able to get error messages (even if you only pipe stdout to `/dev/null` and keep stderr). So, I made a solution that handles the compiling by piping error messages to `less`. The syntax `latexerr FILENAME` will compile the file. Passing `--clean` will delete temporary LaTeX files.

```
#!/bin/bash
# USAGE: ./script FILE --clean --glossary

extensions_to_delete=(gz fls fdb_latexmk blg bbl log aux out nav toc snm glg glo xdy)

# A plain compile pass (no PDF opening); used by the glossary workflow below
compile() {
    pdflatex -shell-escape -interaction=nonstopmode -file-line-error "$1" > /dev/null
}

compile_and_open() {
    argument="$1"
    auxname="${argument%.tex}.aux"

    errors=$(pdflatex -shell-escape -interaction=nonstopmode -file-line-error "$argument" | grep ".*:[0-9]*:.*")

    if [[ -n $errors ]]; then
        echo "$1 Errors Detected"
        echo "$errors" | less
    else
        open_file "$1"
        echo "$1 Compile Successful"
    fi
}

open_file() {
    filename=$(echo "$1" | cut -d'.' -f1)
    open "$filename.pdf"
    echo "$filename Opened"
}

# http://tex.stackexchange.com/questions/6845/compile-latex-with-bibtex-and-glossaries
glossary() {
    compile "$1"
    makeglossaries "$1"
    compile "$1"
    compile "$1"
}

clean() {
    for file in "$(dirname "$1")"/*; do
        filename=$(basename "$file")
        extension="${filename##*.}"
        filename="${filename%.*}"

        for bad_extension in "${extensions_to_delete[@]}"; do
            if [[ $bad_extension = "$extension" ]]; then
                rm "$file"
                echo "$file Deleted"
            fi
        done
    done
}

main() {
    compile_and_open "$1"

    if [ "$3" = "--glossary" ]; then
        glossary "$1"
    fi

    if [ "$2" = "--clean" ]; then
        clean "$1"
    fi
}

main "$@"
```

Updated version(s) will be posted here.


As of the time of this writing, there are approximately 30,000 deaths per year due to cars in the United States. To put that number in perspective, one is more likely to be in a fatal vehicular accident than to be a victim of homicide, overdose on heroin, or be injured in an intentional fire. Even worse, it takes those three statistics combined to cause more deaths per year than vehicles do. These deaths aren't the fault of mechanical error, animal intervention, or even dangerous weather conditions: they are caused by humans. Approximately 94% of traffic accidents are *primarily caused by humans*. This begs the question: how much safer can an algorithmic, non-self-aware autonomous vehicle be compared to a human?

Because self-driving cars are in their infancy (with Carnegie Mellon having the first record of self-driving technology, dating back to 1984), it would seem almost unfair to compare such a computer to a human driver. Or so it would seem. Although humans had a century's head start, self-driving car projects have *already* proven to be sufficiently better drivers. Google's well-known self-driving car project (and their PR department) can attest to such a claim:

“We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That's two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving — and still, not once was the self-driving car the cause of the accident.” (Jacquelyn Miller, Google spokeswoman)

Google is not the only company that's interested; Tesla's Autopilot software is expected to be fully autonomous (and commercially available) come 2017. Along with Google and Tesla, BMW, Mercedes-Benz, and Ford have publicly claimed to be working on self-driving capabilities. There is no dispute: self-driving cars will be radically safer than human drivers, and they will be available soon. Assuming the standard S-curve of technological adoption, a significant price drop, and some luddite-esque opposition, adoption should be mainstream within fifty years at most. Now, what about a hundred years?

Within a hundred years, human driving should be illegal; this is not a moral dilemma, this is a "saving over 3,000 lives a day" solution. Every time a person gets in a car, chooses to drive, and causes an accident, that person chose to cause the accident. It would come as no surprise if people from the 22nd century looked at 21st-century driving as "barbaric". Strictly looking at the statistics and the logical arguments, driving in the 22nd century would not only be morally impermissible, it would be illegal.


`#!/bin/sh` or `#!/bin/bash`). What you might not have known is that `rm` is also a valid interpreter. So the following shell script:
```
#!/bin/rm
echo "Hello, world"
```

would simply be removed when run. Useful? No. Hilarious watching someone trying to debug? Definitely.

You might be familiar with ternaries as `condition ? do something : do something else`, and they become quite useful in comparison to the standard if-else. However, if you've dealt with ternaries a lot, you might have noticed that ternaries also return lvalues/rvalues. Now, as the name suggests, you can assign to lvalues (lvalues are often referred to as locator values). So something like this is possible:

```
std::string x = "foo", y = "bar";
std::cout << "Before Ternary! ";
// prints x: foo, y: bar
std::cout << "x: " << x << ", y: " << y << "\n";
// Use the lvalue from ternary for assignment
(1 == 1 ? x : y) = "I changed";
(1 != 1 ? x : y) = "I also changed";
std::cout << "After Ternary! ";
// prints x: I changed, y: I also changed
std::cout << "x: " << x << ", y: " << y << "\n";
```

Although it makes sense, it's really daunting; I can attest to never having seen it in the wild.

An interesting fact about C++'s bracket operator: it's simply pointer arithmetic. Writing `array[42]` is actually the same as writing `*(array + 42)`, and thinking in terms of x86/64 assembly, this makes sense! It's simply an indexed addressing mode: a base (the beginning location of the array) followed by an offset (42). If this doesn't make sense, that's okay; we will discuss the implications without any need for assembly programming.

So we can do something like `*(array + 42)`, which is interesting; but we can do better. We know addition is commutative, so wouldn't `*(42 + array)` be the same? Indeed it is, and by transitivity, `array[42]` is exactly the same as `42[array]`. The following is a more concrete example.

```
std::string array[50];
42[array] = "answer";
// prints 42 is the answer
std::cout << "42 is the " << array[42] << ".";
```

This one has the least to say and could cause the most damage. The C++ standard allows for hidden whitespace characters in identifiers (i.e., variable names, method/property names, class names, etc.). This makes the following possible.

```
int number = 1;
int number = 2;
int number = 3;
std::cout << number << std::endl; // prints 1
std::cout << number << std::endl; // prints 2
std::cout << number << std::endl; // prints 3
```

Using `\u` as a proxy for a hidden whitespace character, the above code can be rewritten as such:

```
int n\uumber = 1;
int nu\umber = 2;
int num\uber = 3;
std::cout << n\uumber << std::endl; // prints 1
std::cout << nu\umber << std::endl; // prints 2
std::cout << num\uber << std::endl; // prints 3
```

So if you’re feeling like watching the world burn, this would be the way to go.

Gabriel's Horn is thus: suppose you have the function $y = \frac{1}{x}$ where $x \in \mathbb{R}^+,\ 1 \leq x < \infty$, rotated around the $x$-axis. Not too difficult to conceptualize: it looks like a horn of sorts. But here's the paradox.

Suppose we want to calculate the volume. Simple enough, using solids of revolution, we can show the volume to be:

$$V = \pi \lim_{t \rightarrow \infty} \int _1 ^t \frac{1}{x^2} \, dx = \pi \lim _{t \rightarrow \infty} \left( 1 - \frac{1}{t} \right) = \pi $$

A simple, elegant solution; we can expect the volume to be exactly $\pi$. So, let's see about the surface area.

We know the general definition of arc length to be $\int _a ^b \sqrt{1 + f'(x)^2} \, dx$, so combining this with our solids of revolution, we should get

$$A = 2\pi \lim _{t \rightarrow \infty} \int _1 ^t \frac{1}{x} \sqrt{1 + \left( -\frac{1}{x^2} \right)^2 } dx $$

However, this is not a trivial integral; there is, though, a trick we can use. Suppose we take the integral $$2\pi \lim _{t \rightarrow \infty} \int _1 ^t \frac{dx}{x}$$ instead; we can show this integral is always less than or equal to the former (because $\sqrt{1 + \left( -\frac{1}{x^2} \right)^2 } \geq 1$). So, taking this rather trivial integral, we can see that

$$ A \geq 2\pi \lim _{t \rightarrow \infty} \int _1 ^t \frac{dx}{x} \implies A \geq \lim _{t \rightarrow \infty} 2\pi \ln(t) $$

Wait a minute; it's divergent! So we know the volume is $V = \pi$, but the surface area $A$ is infinite. This is no mistake; the math is valid. And that is one of the most counter-intuitive things I have ever run into.

A horn you can fill with paint, but you can't paint the surface.
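The claim is easy to sanity-check numerically; a crude sketch using midpoint Riemann sums (step counts chosen arbitrarily):

```python
import math


def volume(t, steps=100000):
    # pi * integral from 1 to t of 1/x^2 dx, midpoint rule
    dx = (t - 1) / steps
    return math.pi * sum(dx / (1 + (i + 0.5) * dx) ** 2 for i in range(steps))


def area_lower_bound(t):
    # the divergent lower bound 2*pi*ln(t) on the surface area
    return 2 * math.pi * math.log(t)


print(volume(1000))             # approaches pi as t grows
print(area_lower_bound(10**6))  # keeps growing without bound
```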
