It's easy to think of notation like shell expansions: that all you're doing is replacing expressions with other expressions.
But it goes much deeper than that. My professor once explained how many great discoveries are paired with new notation. The new notation signifies "here's a new way to think about this problem", and many of today's unsolved problems will give way to powerful notation.
> paired with new notation
The DSL/language driven approach first creates a notation fitting the problem space directly, then worries about implementing the notation. It's truly empowering. But this is the lisp way. The APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and layout your data very carefully and then everything just falls into place, a bit backwards from the first impression.
You took the words right out of my mouth!
One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs.
If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of initial designs end up getting baked in to so much code that it's really painful to migrate.
APL, on the other hand, I have found extremely amenable to updates and rewrites. I mean, even just psychologically, it feels way more sensible to rewrite a couple of lines of code versus a couple hundred, and in practice I find the language very amenable to quickly exploring a problem domain with code sketches.
I was playing with Uiua, a stack- and array-based programming language. It was amazing to solve Advent of Code problems with just a few lines of code. And as GP said, once you get the right form of array, the handful of functions in the standard library is sufficient.
> One of the issues DSLs give me is that the process of using them invariably obsoletes their utility.
That means your DSL is too specific. It should be targeted at the domain, not at the application.
But yes, it's very hard to make them general enough to be robust, but specific enough to be productive. It takes a really deep understanding of the domain, but even this is not enough.
>If you can manage to fluidly update your DSL design along the way, it might work
Forth and Smalltalks are good for this. Self even more so. Hidden gems.
> The APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10
If this is indeed this simple and this obvious, why didn't other languages follow this way?

That particular quote is from the "Epigrams on Programming" article by Alan J. Perlis, from 1982. Lots of ideas/"Epigrams" from that list are useful, and many languages have implemented lots of them. But some of them aren't so obvious until you've actually put them into practice. The full list can be found here: https://web.archive.org/web/19990117034445/http://www-pu.inf... (the quote in question is item #9)
I think most people haven't experienced the whole "100 functions on 1 data structure instead of 10 on 10" thing themselves, so there are no attempts to bring this to other languages; you can't miss what you were never aware of to begin with.
Then the whole static-typing hype (the current cycle) makes it kind of difficult, because static typing tends to force you into the opposite: one function you can only use on whatever type you specify in its parameters. Of course, traits/interfaces/whatever-your-language-calls-it help with this somewhat, even if it's still pretty static.
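For what it's worth, here's a small Python sketch (structural typing via typing.Protocol, invented example) of how interfaces recover some of that genericity under static checking:

    from typing import Iterable, Protocol

    class Sized(Protocol):
        def __len__(self) -> int: ...

    def total_length(items: Iterable[Sized]) -> int:
        # One generic function usable on strings, lists, dicts - anything with __len__.
        return sum(len(x) for x in items)

    print(total_length(["abc", [1, 2], {"a": 1}]))  # 3 + 2 + 1 = 6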
"APL is like a diamond. It has a beautiful crystal structure; all of its parts are related in a uniform and elegant way. But if you try to extend this structure in any way - even by adding another diamond - you get an ugly kludge. LISP, on the other hand, is like a ball of mud. You can add any amount of mud to it and it still looks like a ball of mud." -- https://wiki.c2.com/?JoelMosesOnAplAndLisp
Because it's domain specific.
If you push this into every kind of application, you will end-up with people recreating objects with lists of lists, and having good reasons to do so.
Some of us think in those terms and daily have to fight those who want 20 different objects, each 5-10 levels deep in inheritance, to achieve the same thing.
I wouldn't say 100 functions over one data structure, but e.g. in Python I prefer a few data structures like dictionaries and arrays, with 10-30 top-level functions that operate over those.
If your requirements are fixed, it's easy to go nuts and design all kinds of object hierarchies - but if your requirements change a lot, I find it much easier to stay close to the original structure of the data that lives in the many files, and operate on those structures.
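A rough sketch of what I mean (made-up data and helper names), staying in plain dicts and lists:

    orders = [
        {"id": 1, "customer": "ada",   "total": 120.0, "status": "paid"},
        {"id": 2, "customer": "grace", "total":  80.0, "status": "open"},
        {"id": 3, "customer": "ada",   "total":  45.0, "status": "paid"},
    ]

    def where(rows, **conds):
        # Keep the rows whose fields match all the given values.
        return [r for r in rows if all(r.get(k) == v for k, v in conds.items())]

    def index_by(rows, key):
        # Group rows into a dict keyed by one field.
        out = {}
        for r in rows:
            out.setdefault(r[key], []).append(r)
        return out

    def total(rows, field="total"):
        return sum(r[field] for r in rows)

    paid_by_customer = index_by(where(orders, status="paid"), "customer")
    print({c: total(rs) for c, rs in paid_by_customer.items()})  # {'ada': 165.0}

When a requirement changes, it's usually a new one-liner composed from the same helpers rather than a new class.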
Good point. Notation matters in how we explore ideas.
Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn't like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him.
And he never stopped. Later, he invented entirely new ways to think about physics tied to how he expressed himself, like Feynman diagrams (https://en.wikipedia.org/wiki/Feynman_diagram) and slash notation (https://en.wikipedia.org/wiki/Feynman_slash_notation).
Douglas Hofstadter has a funny anecdote about superscripts and subscripts
21:30 @ Analogy as the Core of Cognition
> Notation matters in how we explore ideas.
Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? Those of us who can do mental formalism, mathematics, code, etc. - I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI).
(cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes)
Let's think about how an abstraction can be useful, and then redundant.
Logarithms allow us to simplify a hard problem (multiplying large numbers) into a simpler problem (addition), but the abstraction results in an approximation. It's a good enough approximation for lots of situations, but it's a map, not the territory. You could also handle division, which means you could take decent stabs at powers and roots, and voila - once you made that good enough and a bit faster, an engineering and scientific revolution could take place. Marvelous.
For centuries people produced log tables - some so frustratingly inaccurate that Charles Babbage thought of a machine to automate their calculation - and we had slide rules and we made progress.
And then a descendant of Babbage's machine arrived - the calculator, or computer - and we didn't need the abstraction any more. We could quickly type 35325 x 948572 and far faster than any log table lookup, be confident that the answer was exactly 33,508,305,900. And a new revolution is born.
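To make the log trick concrete, here is a quick Python sketch of that same product done both ways (a minimal illustration, not how log tables were actually used):

    import math

    a, b = 35325, 948572

    # Multiplication via logarithms: log(a*b) = log(a) + log(b).
    # With float64 the round trip is essentially exact; a printed four- or
    # five-figure log table would only give an approximation.
    via_logs = math.exp(math.log(a) + math.log(b))
    exact = a * b

    print(via_logs)  # ~33508305900.0, up to a tiny floating-point error
    print(exact)     # 33508305900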
This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. For a while we had a tool that helped (roughly), and then we got a better tool thanks to that tool. And we might be about to get a better tool again where instead of doing the maths, the tool can use more impressive models of physics and engineering to help us build things.
The metaphor I often use is that these tools don't replace people, they just give them better tools. There will always be a place for being able to work from fundamentals, but most people don't need those fundamentals - you don't need to understand the foundations of how calculus was invented to use it, the same way you don't need to build a toaster from scratch to have breakfast, or build your car from base materials to get to the mountains at the weekend.
> But are we not moving into a society where thought is unwelcome?
Not really, no. If anything clear thinking and insight will give an even bigger advantage in a society with pervasive LLM usage. Good prompts don't write themselves.
There's something to economy of thought and ergonomics. On a smaller scale, when CoffeeScript popped up, it radically altered how I wrote JavaScript, because of the lambda shorthand and all the syntactic conveniences. It made it easier to think, read and rewrite.
Same goes for sml/haskell and lisps (at least to me)
Pushing symbols around is what mathematics is all about.
I think you will like this short clip of Brian Greene and Barry Mazur.
Historically speaking, what killed off APL (besides the wonky keyboard) was Lotus 1-2-3 and, shortly thereafter, MS Excel. Engineers, academicians, accountants, and MBAs needed something better than their TI-59 & HP-12C. But the CS community was obsessing about symbolics, AI and LISP, so the industry stepped in...
This was a very unfortunate coincidence, because APL could have had a much bigger impact and solved far more problems than spreadsheets ever will.
APL desperately needs its renaissance. The original vision was hand-written, consistent, and executable math notation. This was never accomplished.
If you are into this, read ahead: https://mlajtos.mu/posts/new-kind-of-paper
> APL could have had much bigger impact and solve far more problems than spreadsheets ever will.
APL is a symbolic language that is very unlike any other language anyone learns during their normal education. I think that really limits adoption compared to spreadsheets.
That's true of the original APL design but later incarnations simplified it considerably.
Spreadsheets were unstable and cumbersome to debug once longer than a sheet, had very slow iterative convergence, and encouraged sloppy, unwieldy coding, but of course they excelled at the presentation level. This resulted in an endless number of "FORTRAN/C++ REPL" tools emerging to fill the gap.
To appreciate the revolutionary design of APL & its descendants, notice that most of the industrial tools that emerged in the 90s and 2000s emulated it under the hood - MATLAB/Sage, Mathematica, Stata/R/SAS, Tableau, and even CERN ROOT/Cling. In trading & quant finance, q/kdb+ is still SOTA.
As I understand it, Dyalog gives away their compiler, until you put it in production. You can do all your problem solving in it without giving them any money, unless you also put the compiled result in front of your paying customers. If your solution fits a certain subset you can go full bananas and copy it into April and serve from Common Lisp.
The thing is, that APL people are generally very academic. They can absolutely perform engineering tasks very fast and with concise code, but in some hypothetical average software shop, if you start talking about function ranking and Naperian functors your coworkers are going to suspect you might need medical attention. The product manager will quietly pull out their notes about you and start thinking about the cost of replacing you.
This is for several reasons, but the most important one is that the bulk of software development is about inventing a technical somewhat formal language that represents how the customer-users talk and think, and you can't really do that in the Iverson languages. It's easy in Java, which for a long time forced you to tell every method exactly which business words can go in and come out of them. The exampleMethod combines CustomerConceptNo127 from org.customer.marketing and CustomerConceptNo211 from org.customer.financial and results in a CustomerConceptNo3 that the CEO wants to look at regularly.
Can't really do that as easily in APL. You can name data and functions, sure, but once you introduce long-winded names and namespaced structuring to map a foreign organisation into your Iverson code, you lose the terseness and elegance. Even with the exceptionally sophisticated type systems in the ML family you'll find that developers struggle to draw such direct connections between an invented quasilinguistic ontology and an organisation and its processes, and more regularly opt for mathematical or otherwise academic concepts.
It can work in some settings, but you'll need people that can do both the theoretical stuff and keep in mind how it translates to the customer's world, and usually it's good enough to have people that can only do the latter part.
> Can't really do that as easily in APL.
This doesn't match my experience at all. I present you part of a formal language over an AST, no cover functions in sight:
p⍪←i ⋄ t k n pos end(⊣⍪I)←⊂i ⍝ node insertion
i←i[⍋p[i←⍸(t[p]=Z)∧p≠⍳≢p]] ⍝ select sibling groups
msk←~t[p]∊F G T ⋄ rz←p I@{msk[⍵]}⍣≡⍳≢p ⍝ associate lexical boundaries
(n∊-sym⍳,¨'⎕⍞')∧(≠p)<{⍵∨⍵[p]}⍣≡(t∊E B) ⍝ find expressions tainted by user input
These are all cribbed from the Co-dfns[0] compiler and related musings. The key insight here is that what would be API functions or DSL words are just APL expressions on carefully designed data. To pull this off, all the design work that would go into creating an API goes into designing said data to make such expressions possible.

In fact, when you see the above in real code, they are all variations on the theme, tailored to the specific needs of the immediate sub-problem. As library functions, such needs would accrete extra functions and function parameters into our library methods over time, making them harder to understand and visually noisier in the code.
To my eyes, the crux is that our formal language is _discovered_ not handed down from God. As I'm sure you're excruciatingly aware, that discovery process means we benefit from the flexibility to quickly iterate on the _entire architecture_ of our code, otherwise we end up with baked-in obsolete assumptions and the corresponding piles of workarounds.
In my experience, the Iversonian languages provide architectural expressability and iterability _par excellence_.
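Not APL, but here is a rough Python/numpy sketch of the same flavor - plain expressions over a parent-vector AST (toy tree and invented type codes, just to illustrate the shape of the approach):

    import numpy as np

    # Toy AST as flat parallel arrays (a "parent vector"); type codes are made up.
    E, B, F = 0, 1, 2                    # Expression, Block, Function
    t = np.array([B, F, E, E, E, E])     # node type of each node
    p = np.array([0, 0, 1, 1, 0, 4])     # parent of each node; the root points at itself

    # "Queries" are just expressions over the arrays, no API layer:
    block_children = np.flatnonzero(t[p] == B)    # nodes whose parent is a Block
    sibling_order = np.argsort(p, kind="stable")  # visiting in this order groups siblings

    # Depth of every node, by following parents until nothing moves.
    depth = np.zeros(len(p), dtype=int)
    moved = np.arange(len(p)) != p
    q = p.copy()
    while moved.any():
        depth[moved] += 1
        moved &= q != p[q]
        q = p[q]

    print(depth)  # [0 1 2 2 1 2]

Each new question about the tree tends to become another short expression over t and p rather than a new method on a Node class.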
Java and C# are good for this kind of situation where you want to imitate the business jargon in a technical form. But programming languages like CL, Clojure, and APL have a more elegant and flexible way to describe the same solution, and in the end one that is easier to adapt. Because, in the end, the business jargon is very flexible (business objectives and policies are likely to change next quarter), and in Java rewriting means changing a lot of lines of code (easier with the IDE, but still).
The data rarely changes, but you have to put a name on it, and those names are dependent on policies. That's the issue with most standard programming languages. In functional languages and APL, you don't name your data, you just document its shape[0]. Then when your policies are known, you just write them using the functions that can act on each data type (lists, sets, hashes, primitives, functions, ...). Policy changes just mean a little bit of reshuffling.
[0]: In the parent example, CustomerConceptNo{127,211,3} are the same data, but with various transformations applied and with different methods to use. In functional languages, you would only have a customer data blob (probably coming from some DB), then a chain of functions that pipe out CustomerConceptNo{127,211,3} when they are actually needed (generally at the interface). But they would be composed of the same data structures that the original blob has, so all your base functions do not automatically become obsolete.
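A rough Python sketch of that "one blob, derive views with functions" idea (field names and the CustomerConceptNoXXX mapping are invented for illustration):

    customer = {
        "id": 42,
        "segment": "smb",
        "orders": [{"total": 120.0, "month": "2024-01"},
                   {"total": 45.0,  "month": "2024-02"}],
    }

    def lifetime_value(c):        # roughly the "CustomerConceptNo127" view
        return sum(o["total"] for o in c["orders"])

    def monthly_totals(c):        # roughly the "CustomerConceptNo211" view
        out = {}
        for o in c["orders"]:
            out[o["month"]] = out.get(o["month"], 0.0) + o["total"]
        return out

    def ceo_summary(c):           # roughly the "CustomerConceptNo3" view
        return {"id": c["id"], "ltv": lifetime_value(c), "by_month": monthly_totals(c)}

    print(ceo_summary(customer))

When the policy changes, you add or rearrange a derivation function; the blob and the helpers below it stay put.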
You left out that exampleMethod will of course belong to a Conway's law CustomerConceptManager object. I think this is one of the reasons that software as a field has taken off so well in recent decades while more conventional physical-stuff engineering has stagnated (outside of lithography to support... software) - you can map bureaucracy onto software and the bureaucratic mindset of "if I think I should be able to do something, I should just be able to tell people to figure out how" has fewer hard limits in software.
The base concept is related to other useful ones.
The Sapir-Whorf hypothesis is similar. I find it most interesting when you turn it upside down - in any less than perfect language there are things that you either cannot think about or are difficult to think about. Are there things that we cannot express and cannot think about in our language?
And the terms "language" and "thought" can be broader than our usual usage. For example do the rules of social interaction determine how we interact? Zeynep Tufekci in "Twitter and Teargas" talks about how twitter affords flash mobs, but not lasting social change.
Do social mechanism like "following" someone or "commenting" or "liking" determine/afford us ways of interacting with each other? Would other mechanisms afford of better collective thinking. Comments below. And be sure to like and follow. :-)
And then there is music. Not the notation, but does music express something that cannot be well expressed in other ways?
> in any less than perfect language there are things that you either cannot think about or are difficult to think about.
Not that they are difficult to think about; rather, it would have never occurred to you to think about it in the first place. As a person who learned multiple foreign languages, there are many things that I can think about only in a certain language and not my native language (English). For example, there are many meanings of "гулять" in Ukrainian and Russian that are not captured in English, and I never thought about those meanings before I learned Ukrainian and Russian.
Гулять literally means "to walk", but it is used to mean so much more than that. It also means to seek experiences, including sexual experiences. A person might complain they got married too early because "не нагулялся", or "didn't walk enough". While similar expressions like "sow his wild oats" exist in English, it affects my thoughts differently to have so much meaning packed into the verb "to walk". It literally changes how I think about walking through life.
Similarly, when I learned Arabic, there are many meanings and thoughts that I only have in that language and that would take entire essays to explain in English. Not because it can't be explained in English (it can), but the notation is just not there to do it succinctly enough.
I love your comment! I can tell that you're a tangential referential thinker, and I assume the downvotes are more from people frustrated that the bounds of the thought-space are being fuzzed in ways frustrating to their minds.
Metaphor and analogy are in a similar spirit to what you are speaking of. The thinking they ground is frustratingly hard to contain. Some people love to travel through language into other thoughts, and some are paralysed by that prospect.
As always, success is in the balance and bothness :)
Anyway, thanks for sharing!
Despite this being very obviously true to any mathematician or computer scientist, this idea is incredibly controversial among linguists and "educators".
The linguistic analogue (although the exact example of notation does solidly fit into the domain of linguistics) is the so-called Sapir-Whorf hypothesis, which asserts that the language you learn determines how you think.
Because natural languages are cultural objects, and mapping cultures into even a weak partial order (like how a person thinks) is simply verboten in academia.
This has profound consequences in education too, where students are disallowed from learning notation that would allow them to genuinely reason about the problems that they encounter. I admit to not understanding this one.
Sapir-Whorf doesn't negate the possibility of forming the same ideas using more primitive constructs.
The ability of a speaker of any language to learn the same mathematics or computer programs goes to show that.
I'd contest that spoken/written language is even necessary for thinking. At the very least, there is a large corpus of thought which does not require it (at some point humans spoke few or no words, and it was their thought/intention to communicate that drove the formation of words/language), so it's silly to me to think of learned language as some base model of thought.
When you learn another language (mathematics, programming), the sapir whorf hypothesis no longer makes the same predictions.
I've had arguments about the Sapir-Whorf idea before. Sucks, as I'm not familiar with the environment it originated in, but it seems that people have taken an encoding idea and expanded it to experience, writ large.
That is, people will claim that some societies having the same word for the color of the sea and the color of grass indicates that they don't experience a difference between the two. Not just that they encode the experiences into memories similarly, but that they don't see the differences.
You get similar when people talk about how people don't hear the sounds that aren't used by their language. The idea is that the unused sounds are literally not heard.
Is that genuinely what people push with those ideas?
The argument of notation, as here, is more that vocabulary can be used to explore. Instead of saying you heard some sound, you heard music. Specific chord progressions and such.
I think they push the opposite... mainly a weak Sapir-Whorf, but not a strong one. You still have a human brain, after all. Do people in Spain think of a table as inherently having feminine qualities because their language is gendered? Probably, to some very small degree.
There is a linguist claiming a stronger version after translating and working with the piriue (not spelling that right) people. Chomsky refuses to believe it, but they can't falsify the guy's claims until someone else goes and verifies. That's what I read anyway.
Edit: Piraha people and language
Funny, as I was seeing a lot of people try to push the idea that the gender of language is fully separate from the gender of sex, that there was no real connection between them. I always find this a tough conversation because English is not a heavily gendered language; I have no idea how much gender actually enters thinking in the languages that we say are gendered.
My favorite example of this used to be my kids talking about our chickens. Trying to get them to use feminine pronouns for the animals is basically a losing game. That cats are still coded as primarily female, despite us never having had a female cat, is largely evidence to me that something else is going on there.
I'm curious if you have reading on the last point. Can't promise to get to it soon, but I am interested in the ideas.
I am currently developing a project in APL - it was in my backlog for a long time but I'm actually writing code now.
I phrase it that way because there was a pretty long lag between when I got interested and when I was able to start using it in more than one-liners.
But I discovered this paper early in that process and devoured it. The concepts are completely foundational to my thinking now.
Saying all this to lead to the fact that I actually teach NAATOT in an architecture program. Not software architecture - building architecture. I have edited a version to hit Iverson's key points, kept in just enough of the actual math / programming to illustrate these points and challenge the students to try to think differently about the possibilities for the design and representation tools they use, eg their processes for forming ideas, their ways of expressing them to themselves and others, and so forth.
If I had the chance (a more loose, open-ended program rather than an undergraduate professional program) one of my dreams is to run a course where students would make their own symbolic and notational systems... (specifically applied to the domain of designing architecture - this is already pretty well covered in CS, graphic design, data science, etc.)
I really wish I had finished my old Freeform note-taking app that compiles down to self-contained webpages (via SVG).
IMO it was a super cool idea for more technical content that's common in STEM fields.
Here's an example from my old chemistry notes:
https://colbyn.github.io/old-school-chem-notes/dev/chemistry...
I see your Show HN post, this is brilliant. https://news.ycombinator.com/item?id=25474335
What system are you using now?
After years of looking at APL as some sort of magic, I spent some time earlier this year learning it. It is amazing how much code you can fit into a tweet using APL. Fun, but hard for me to write.
It's not as extreme but I feel similarly every time I write dense numpy code. Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.
For some reason the reality is unintuitive to me - that the other tools would have taken me far longer. All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner.
I think it's like climbing a steeper but much shorter path. It feels like more work but it's actually less. (The point of my rambling here is that I probably ought to learn APL and use it instead.)
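A small numpy example of what I mean - the dense version takes me longer to get right, but it is the whole problem statement in one expression (toy pairwise-distance task):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))

    # Loop version: easy to start typing, many lines.
    n = len(X)
    d_loop = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d_loop[i, j] = np.sqrt(((X[i] - X[j]) ** 2).sum())

    # Dense array version: one broadcasted expression.
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))

    assert np.allclose(d, d_loop)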
Should you ever decide to take that leap, maybe start here:
https://xpqz.github.io/learnapl
(disclosure: author)
I have been reading through your site, working on an APL DSL in Lisp. Excellent work! Thank you.
Stefan, I tried many different entry points to actually start learning APL. (I have a project which is perfectly fit to the array paradigm).
Yours is by far the best. Thank you for it.
Looks wonderful, thanks for sharing your work!
Indeed numpy is essentially just an APL/J with more verbose and less elegant syntax. The core paradigm is very similar, and numpy was directly inspired by the APLs.
People actually managed to channel the APL hidden under numpy into a full array language implemented on top of it: https://github.com/briangu/klongpy
I don't know APL, but that has been my thought as well - if APL does not offer much over numpy, I'd argue that the latter is much easier to read and reason through.
> It's not as extreme but I feel similarly every time I write dense numpy code.
https://analyzethedatanotthedrivel.org/2018/03/31/numpy-anot...
>Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool.
I think there is also a psychological bias, we feel more "productive" in a more verbose language. Subconsciously at least, we think "programmers produce code" instead of thinking "programmers build systems".
Any examples you can share?
> Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context.
I personally disagree with the premise of this paper.
I think notation that is separated from the visualization and ergonomics of the problem has a high cost. Some academics prefer notation that hides away a lot of the complexity, which can potentially result in "Eureka" realizations, wild equivalences and the like. In some cases, however, it can be obfuscating and prone to introducing errors. Yet it's an important tool for communicating a train of thought.
In my opinion, having one standard notation for any domain (or set of closely related domains) is quite stifling of the creative, artistic, or explorative side of reasoning and problem solving.
Also, here's an excellent exposition about notation by none other than Terry Tao https://news.ycombinator.com/item?id=23911903
This feels like typed vs. untyped programming.
There are efforts in math to build "enterprise" reasoning systems, and for these it makes sense to have a universal notation system (Lean, Coq, and the like).
But for a personal exploration, it might be better to just jam in whatever.
My personal strife in this space is more about teaching: taking algebra classes, etc., where the teacher is neither consistent nor honest about the personal decisions and preferences they have about notation. I became significantly better at math when I started studying type theory and the theory of mechanical proofs.
I have to admit that consistency and clarity of thought are often not implied just by the choice of notation, and I have not seen many books or professors putting effort into emphasizing its importance or even introducing it formally. I've seen cases where people use fancier notation to document a topic than the notation they actually think in. It drives me nuts, because in the way you tell the story, you hide a lot of how you arrived there.
This is why I latched onto the exposition by Terry Tao. It shows how much clarity of thought he has that he understands the importance of notation.
The problem the article is talking about is that those different notations are used for super basic stuff that really does not need any of that.