https://en.wikipedia.org/wiki/56_Leonard_Street
821′ tall Jenga tower
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
QT:{{”
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent
optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.[1]
“}}
SGD, in its simplest form, estimates the gradient by randomly selecting just one data point per iteration.
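A minimal sketch of that one-point-at-a-time idea, using a toy 1-D least-squares fit (the data, learning rate, and loss here are illustrative, not from the article):

```python
import random

def sgd(data, grad_fn, w0, lr=0.01, steps=500, seed=0):
    """Minimal SGD: each step uses the gradient at ONE randomly chosen point,
    a noisy but cheap estimate of the full-data-set gradient."""
    rng = random.Random(seed)
    w = w0
    for _ in range(steps):
        x, y = rng.choice(data)        # single random sample, not the whole data set
        w -= lr * grad_fn(w, x, y)     # step along the stochastic gradient estimate
    return w

# Toy problem: fit y = w*x with squared loss; d/dw (w*x - y)^2 = 2*(w*x - y)*x
data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]
grad = lambda w, x, y: 2.0 * (w * x - y) * x
w_hat = sgd(data, grad, w0=0.0)
print(w_hat)  # should land very close to the true slope 3.0
```

Per the quoted trade-off: each iteration is far cheaper than full gradient descent, at the cost of noisier (slower-converging) steps.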
https://en.wikipedia.org/wiki/Polsby%E2%80%93Popper_test#cite_note-3 QT:{{”
The Polsby–Popper test is a mathematical compactness measure of a shape developed to quantify the degree of gerrymandering of political districts. The method was developed by lawyers Daniel D. Polsby and Robert Popper,[1] though it had earlier been introduced in the field of paleontology by E.P. Cox.[2]
“}}
PP = 4*Pi*A/P^2 (A = area, P = perimeter); a circle scores 1, and less compact shapes score closer to 0.
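A quick sketch of the formula, comparing a circle against a long thin rectangle (the 1x100 "district" is a made-up example):

```python
import math

def polsby_popper(area, perimeter):
    """PP = 4*pi*A / P^2: 1.0 for a circle, approaching 0 as compactness drops."""
    return 4.0 * math.pi * area / perimeter ** 2

# A circle is maximally compact (score exactly 1):
r = 5.0
print(polsby_popper(math.pi * r**2, 2 * math.pi * r))   # 1.0

# A 1x100 rectangle -- the kind of elongated shape gerrymandering produces:
print(round(polsby_popper(1 * 100, 2 * (1 + 100)), 4))  # 0.0308
```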
Ellenberg, J. (2014, May 30). The wrong way to treat child geniuses. The Wall Street Journal.
https://www.wsj.com/articles/the-wrong-way-to-treat-child-geniuses-1401484790
https://en.wikipedia.org/wiki/Flatland
QT:{{”
Following this vision, the Square is visited by a sphere. Similar to the “points” in Lineland, he is unable to see the three-dimensional object as anything other than a circle (more precisely, a disk). The Sphere then levitates up and down through Flatland, allowing the Square to see the circle expand and contract between a great circle and small circles. The Sphere then tries further to convince the Square of the third dimension by dimensional analogies (a point becomes a line, a line becomes a square).
“}}
QT:{{”
The U.S. Electoral College can be understood as analogous to a simple neural network (specifically, a single-layer network or a “perceptron” with a specific aggregation function) because it involves a two-layer decision-making process with weighted inputs and a final binary output.
Analogy Breakdown
Input Layer (Voters/Popular Votes): Individual votes cast within each state represent the initial inputs. These inputs are aggregated at the state level.
Weighted Connections (Electoral Votes per State): Each state is assigned a specific number of electoral votes (its “weight”), which is based on its representation in Congress. States with larger
populations have more “weight” in the final decision.
Hidden Layer / Processing Unit (State Tally and “Winner-Take-All”): In most states, the candidate who wins the majority of the popular votes in that state receives all of its assigned electoral votes (the “winner-take-all” system). This functions like a processing unit with a specific activation function: the output for a state is a single, unified signal (all its electoral votes) for one candidate.
Output Layer (The Presidency): The total number of electoral votes from all states are summed up. The candidate who reaches the threshold of 270 or more electoral votes (out of 538 total) wins the presidency. This is the final output of the system.
“}}
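The winner-take-all aggregation described above can be sketched as a sign-activation step followed by a weighted sum. The three states and vote counts below are hypothetical, chosen to show how the electoral tally can diverge from the popular vote:

```python
# Each "state" unit applies a winner-take-all activation to its popular-vote
# margin, then contributes ALL of its weight (electoral votes) to one side.
states = {
    # name: (votes for A, votes for B, electoral votes) -- hypothetical numbers
    "X": (60, 40, 10),
    "Y": (51, 49, 20),   # razor-thin margin still yields the full 20 votes
    "Z": (30, 70, 8),
}

def electoral_outcome(states):
    """Sum each state's full weight for whichever candidate won it."""
    total_a = sum(ev for a, b, ev in states.values() if a > b)
    total_b = sum(ev for a, b, ev in states.values() if b > a)
    return "A" if total_a > total_b else "B"

print(electoral_outcome(states))  # "A" wins 30-8 in electoral votes,
                                  # even though B leads the popular vote 159-141
```

The nonlinearity matters: because each state unit saturates (all-or-nothing output regardless of margin size), the final sum need not track the raw popular-vote totals, just as a perceptron's output is not a linear function of its inputs.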