
Showing posts from July, 2025

7/22/2025

nvm, google deepmind also got gold on the imo. they should test these models on the ioi too 🤔

tori!

in diff geo we are learning about parametrized surfaces! this site says that a torus can be covered with one surface patch and thought of like a rectangular piece of rubber stretched around... but in our exercise we parametrize it so that we need 4 patches to make sure we're working with open sets (the standard formula is written out at the end of this post). i saw some other websites saying you need at least 2 patches?!? so idk. btw this 3d plotting calculator is great for visualizing things - https://c3d.libretexts.org/CalcPlot3D/index.html there are also different types of tori

frenet-serret equations

these seem like just a normal triplet of equations that form the orthonormal basis for the curve (called the frenet-serret frame), but they're incredibly useful for solving problems and investigating properties of curves. t = tangent vector, n = normal vector, b = binormal vector (the equations themselves are at the end of the post)

rubiks cubing

i've been trying to cube recent...
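(the exercise's exact parametrization isn't reproduced above, but for reference, a standard torus parametrization, with R the distance from the center of the torus to the center of the tube and r the tube radius, is:

\[
\sigma(u, v) = \big( (R + r\cos v)\cos u,\ (R + r\cos v)\sin u,\ r\sin v \big)
\]

restricting (u, v) to an open square like (0, 2π) × (0, 2π) misses the two circles where u = 0 or v = 0, so you need shifted copies of this patch to cover the whole surface, which matches the 4-patch setup.)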
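(and for completeness, the frenet-serret equations for a unit-speed curve with curvature κ and torsion τ, in the t, n, b notation from above:

\[
\begin{aligned}
\mathbf{t}' &= \kappa\,\mathbf{n}, \\
\mathbf{n}' &= -\kappa\,\mathbf{t} + \tau\,\mathbf{b}, \\
\mathbf{b}' &= -\tau\,\mathbf{n}.
\end{aligned}
\])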

7/20/2025

this past week was the international math olympiad!! it was exciting to see the performance of people i've met in real life, and they all did very well, though i'm not a fair judge, especially since i'm not that good at oly math. also, what's funny is that i thought there was no way ai would win a gold, and initial reports from matharena.ai showed that it couldn't even achieve a bronze medal... and then the day after, openai just had to tell everyone they had a "not very open" model that could get 35/42 (coordbashing geo lol). the proof style is very strange... see this github repo, almost like the ai has developed its own way of checking itself as it proceeds down the proof. people on r/singularity cheered! it will definitely be interesting to see the future of math competitions now, but it could be just like chess; after all, ai solving these problems is not really an apples-to-apples comparison to the students who do math contests in general, ...

7/12/2025

okayy so i was at a math camp and it was very fun! hm, my favorite math thing i learned at camp would probably be functional inequalities, since i've never seen them before and they're non-geo (oops). back to the exploration jungle tho! i finally read a post about sparse autoencoders, although it was very confusing. here are some main takeaways:

1. dictionary learning of features - creating a sparser dictionary such that linear combinations of its elements make up the activations of a layer. we want to encourage sparsity (fewer dictionary features are needed to reconstruct activations, which increases interpretability and efficiency). (a tiny pytorch sketch of this is after the list.)

2. but this is still hard: we run into things like feature oversplitting (splitting features that should be cohesive) and the infinite-width codebook (memorizing examples such that inputs are put directly into the dictionary). sparsity metrics based on different L norms (like L0 vs L1) also encounter issues like shrinkage and load balancing. (see the shrinkage illustration after the list.)

3. choosing an activation fu...
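(to make takeaway 1 concrete, here's a minimal pytorch sketch of the idea - not the post's actual setup; the dims, the relu encoder, and the l1 coefficient are all just illustrative choices, and real SAEs add extra tricks like decoder weight normalization:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """reconstruct layer activations as sparse combinations of dictionary features."""
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)  # activation -> feature coefficients
        self.decoder = nn.Linear(d_dict, d_model)  # columns of weight = dictionary features

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # nonnegative, hopefully sparse, coefficients
        x_hat = self.decoder(f)          # linear combination of dictionary elements
        return x_hat, f

model = SparseAutoencoder(d_model=512, d_dict=4096)  # overcomplete dictionary
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(64, 512)  # stand-in for a batch of layer activations
x_hat, f = model(x)

recon = (x - x_hat).pow(2).mean()      # how well the dictionary reconstructs x
sparsity = f.abs().sum(dim=-1).mean()  # l1 penalty: few active features per input
loss = recon + 1e-3 * sparsity         # 1e-3 is a made-up l1 coefficient

opt.zero_grad()
loss.backward()
opt.step()
```
)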
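(and for takeaway 2, a quick illustration of shrinkage: the l1 norm is a differentiable stand-in for the l0 count of active features, but minimizing it subtracts the penalty from every surviving coefficient, so even correctly-identified features come out too small. the numbers here are made up:

```python
import torch

f = torch.tensor([0.0, 2.5, 0.0, 0.1, 0.0])  # hypothetical feature coefficients

l0 = (f != 0).sum().item()  # l0 "norm": number of active features -> 2
l1 = f.abs().sum().item()   # l1 norm: differentiable proxy for l0 -> 2.6

# minimizing 0.5*(c - f)**2 + lam*abs(c) coordinate-wise gives soft thresholding:
lam = 0.5
shrunk = torch.sign(f) * torch.clamp(f.abs() - lam, min=0.0)
print(shrunk)  # tensor([0., 2., 0., 0., 0.]) -- the 2.5 got shrunk to 2.0
```
)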