# Frenetic Array

## A canvas for logic and imagination.

The most frustrating thing about compiling LaTeX in a terminal is the wall of text written to stdout. An easy solution would be piping to /dev/null, except then you wouldn't get error messages (even if you pipe only stdout to /dev/null and keep stderr, since pdflatex writes its errors to stdout). So I made a script that handles the compiling and pipes any error messages to less. Running latexerr FILENAME compiles the file; passing --clean deletes the temporary LaTeX files, and --glossary rebuilds the glossary.

```bash
#!/bin/bash
# USAGE: ./script FILE [--clean] [--glossary]

extensions_to_delete=(gz fls fdb_latexmk blg bbl log aux out nav toc snm glg glo xdy)

# Quiet compile pass; the glossary build needs extra passes.
compile() {
    pdflatex -shell-escape -interaction=nonstopmode -file-line-error "$1" > /dev/null
}

compile_and_open() {
    argument="$1"
    errors=$(pdflatex -shell-escape -interaction=nonstopmode -file-line-error "$argument" | grep ".*:[0-9]*:.*")

    if [[ -n $errors ]]; then
        echo "$1 Errors Detected"
        echo "$errors" | less
    else
        open_file "$1"
        echo "$1 Compile Successful"
    fi
}

open_file() {
    filename=$(echo "$1" | cut -d'.' -f1)
    open "$filename.pdf"
    echo "$filename Opened"
}

# http://tex.stackexchange.com/questions/6845/compile-latex-with-bibtex-and-glossaries
glossary() {
    compile "$1"
    makeglossaries "${1%.tex}"
    compile "$1"
    compile "$1"
}

clean() {
    for file in "$(dirname "$1")"/*; do
        filename=$(basename "$file")
        extension="${filename##*.}"

        for bad_extension in "${extensions_to_delete[@]}"; do
            if [[ $bad_extension = "$extension" ]]; then
                rm "$file"
                echo "$file Deleted"
            fi
        done
    done
}

main() {
    compile_and_open "$1"

    if [ "$3" = "--glossary" ]; then
        glossary "$1"
    fi

    if [ "$2" = "--clean" ]; then
        clean "$1"
    fi
}

main "$@"
```

Updated version(s) will be posted here.

For a particular class, I had to write a paper titled "Something That Would Be Considered Morally Impermissible in the 22nd Century". Immediately, I thought of self-driving cars. Although it wasn't a particularly well-written paper, it does make a good case for self-driving cars.

As of this writing, there are approximately 30,000 deaths due to cars per year in the United States. To put that number in perspective, one is more likely to die in a vehicular accident than to be a victim of homicide, overdose on heroin, or be injured in an intentional fire. Even worse, vehicles cause more deaths per year than those three causes combined. These deaths aren't the fault of mechanical error, animal intervention, or even dangerous weather conditions: they are caused by humans. Approximately 94% of traffic accidents are primarily caused by human error. This raises the question: how much safer can an algorithmic, non-self-aware autonomous vehicle be compared to a human?

Because self-driving cars are in their infancy (Carnegie Mellon has the first record of self-driving technology, dating back to 1984), it would seem almost unfair to compare such a computer to a human driver. Or so it would seem. Although humans had a century's head start, self-driving car projects have already proven to be better drivers. Google's well-known self-driving car project (and their PR department) can attest to such a claim:

> "We just got rear-ended again yesterday while stopped at a stoplight in Mountain View. That's two incidents just in the last week where a driver rear-ended us while we were completely stopped at a light! So that brings the tally to 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving — and still, not once was the self-driving car the cause of the accident."
*Jacquelyn Miller, Google spokeswoman*

Google is not the only company that is interested; Tesla's Autopilot software is expected to be fully autonomous (and commercially available) come 2017. Along with Google and Tesla, BMW, Mercedes-Benz, and Ford have publicly stated that they are working on self-driving capabilities. There is no dispute: self-driving cars will be radically safer than human drivers, and they will be available soon. Assuming the standard S-curve of technological adoption, a significant drop in price, and some Luddite-esque opposition, adoption should be mainstream within fifty years at most.

Now, what about a hundred years? Within a hundred years, human-driven cars should be illegal; this is not a moral dilemma, this is a "saving over 3,000 lives a day" solution. Every time a person gets in a car, chooses to drive, and causes an accident, that person chose to cause the accident. It should come as no surprise that people in the 22nd century will look at 21st-century driving as "barbaric". Looking strictly at the statistics and the logical arguments, driving in the 22nd century will not only be morally impermissible; it will be illegal.

If you've ever written a shell script, you've probably topped it with an interpreter directive (i.e. `#!/bin/sh` or `#!/bin/bash`). What you might not have known is that `rm` is also a valid interpreter. So the following shell script:

```bash
#!/bin/rm
echo "Hello, world"
```

would simply remove itself when run. Useful? No. Hilarious to watch someone try to debug? Definitely.

After using a fairly large, mature language for a reasonable period of time, finding peculiarities in the language or its libraries is guaranteed to happen. However, given its history, I have to say C++ allows for some of the strangest peculiarities in its syntax. Below are three of my favorites.

### Ternaries Returning lvalues

You might be familiar with ternaries as `condition ? do_something : do_something_else`, and they become quite useful in comparison to the standard if-else. However, if you've dealt with ternaries a lot, you might have noticed that they also return lvalues/rvalues. As the name suggests, you can assign to lvalues (lvalues are often referred to as locator values). So something like the following is possible:

```cpp
std::string x = "foo", y = "bar";

std::cout << "Before Ternary!\n";
// prints x: foo, y: bar
std::cout << "x: " << x << ", y: " << y << "\n";

// Use the lvalue from the ternary for assignment
(1 == 1 ? x : y) = "I changed";
(1 != 1 ? x : y) = "I also changed";

std::cout << "After Ternary!\n";
// prints x: I changed, y: I also changed
std::cout << "x: " << x << ", y: " << y << "\n";
```

Although it makes sense, it's really daunting; I can attest to never having seen it in the wild.

### Commutative Bracket Operator

An interesting fact about the C++ bracket operator: it's simply pointer arithmetic. Writing `array[42]` is actually the same as writing `*(array + 42)`, and thinking in terms of x86-64 assembly, this makes sense! It's simply an indexed addressing mode: a base (the beginning location of `array`) followed by an offset (42). If this doesn't make sense, that's okay; we can discuss the implications without any assembly programming.

So we can write `*(array + 42)`, which is interesting, but we can do better. We know addition is commutative, so shouldn't `*(42 + array)` be the same? Indeed it is, and by the same token, `array[42]` is exactly the same as `42[array]`. The following is a more concrete example:

```cpp
std::string array[50];
42[array] = "answer";

// prints 42 is the answer.
std::cout << "42 is the " << array[42] << ".";
```

### Zero Width Space Identifiers

This one has the least to say, and could cause the most damage. The C++ standard allows for hidden whitespace characters in identifiers (i.e. variable names, method/property names, class names, etc.), which makes the following possible:
```cpp
int n​umber = 1;
int nu​mber = 2;
int num​ber = 3;

std::cout << n​umber << std::endl;  // prints 1
std::cout << nu​mber << std::endl;  // prints 2
std::cout << num​ber << std::endl;  // prints 3
```

Using `\u` as a proxy for the hidden whitespace character, the above code can be rewritten as:

```cpp
int n\uumber = 1;
int nu\umber = 2;
int num\uber = 3;

std::cout << n\uumber << std::endl;  // prints 1
std::cout << nu\umber << std::endl;  // prints 2
std::cout << num\uber << std::endl;  // prints 3
```

So if you're feeling like watching the world burn, this would be the way to go.

There are many things in the world of mathematics and physics that are quite unintuitive; however, I am not sure there will ever be anything more unintuitive to me than Gabriel's Horn.

Gabriel's Horn is constructed as follows: take the function $y = \frac{1}{x}$ for $x \in \mathbb{R}, 1 \leq x < \infty$, and rotate it around the $x$-axis. It is not too difficult to conceptualize: it looks like a horn of sorts. But here's the paradox.

Suppose we want to calculate the volume. Simple enough; using solids of revolution, we can show the volume to be:

$$V = \pi \lim_{t \to \infty} \int_1^t \frac{1}{x^2} \, dx = \pi \lim_{t \to \infty} \left( 1 - \frac{1}{t} \right) = \pi$$

A simple, elegant solution; the volume is exactly $\pi$.

Now for the surface area. We know the arc length element to be $\sqrt{1 + f'(x)^2} \, dx$, so combining this with surfaces of revolution, we get

$$A = 2\pi \lim_{t \to \infty} \int_1^t \frac{1}{x} \sqrt{1 + \left( -\frac{1}{x^2} \right)^2} \, dx$$

This is not a trivial integral, but there is a trick. Consider instead

$$2\pi \lim_{t \to \infty} \int_1^t \frac{dx}{x}$$

Since $\sqrt{1 + \frac{1}{x^4}} \geq 1$, this second integral is always less than or equal to the first. So, taking this rather trivial integral, we can see that

$$A \geq 2\pi \lim_{t \to \infty} \int_1^t \frac{dx}{x} = \lim_{t \to \infty} 2\pi \ln(t)$$

Wait a minute; it's divergent! So the volume is $V = \pi$, but the surface area $A$ is infinite. This is no mistake; the math is valid. And that is one of the most counterintuitive things I have ever run into.

A horn you can fill with paint, but you can't paint the surface.
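If you want to see the paradox numerically, here is a quick sketch (the function names are mine) that truncates the horn at $x = t$ and approximates both integrals with a midpoint rule. As $t$ grows, the volume settles toward $\pi$ while the surface area keeps climbing like $2\pi \ln(t)$:

```python
import math

def midpoint_integral(f, a, b, n=50_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def horn_volume(t):
    # V(t) = pi * integral from 1 to t of 1/x^2 dx = pi * (1 - 1/t)
    return math.pi * midpoint_integral(lambda x: 1 / x**2, 1, t)

def horn_area(t):
    # A(t) = 2*pi * integral from 1 to t of (1/x) * sqrt(1 + 1/x^4) dx,
    # which is bounded below by 2*pi*ln(t)
    return 2 * math.pi * midpoint_integral(
        lambda x: (1 / x) * math.sqrt(1 + 1 / x**4), 1, t)

for t in (10, 100, 1000):
    print(f"t = {t:4d}  volume = {horn_volume(t):.4f}  area = {horn_area(t):.2f}")
```

No matter how far out you truncate, the volume column never exceeds $\pi$, while the area column grows without bound.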