Gödel, Escher, Bach covers this in great detail. Symbols are a weird idea in the human brain, and there are multiple pseudo-languages throughout the book that, when you really sit down and think about them, aren't any less ridiculous than the notation we use for mathematics outside of that book.
My idea-book (is that a crazy-person thing?) is probably a lot like what you're talking about. I use words for my main data, different arrows going between each sub-idea, and then sometimes more words attached to each arrow ("pub/subs", "queries", "happens once/happens always", etc.)
I think that people who believe visual programming could be a thing ought to sit down with APL for a few weekends, until they have the epiphany of "oh, the code is the same as the notation for what I'm actually doing." Then they realize that notation and code are interlinked, and each is basically useless without the other, in the same sort of way that x--p--p---- is exactly as valid as 2 + 2 = 4 without context.
I get the allure of NOT having to write code, but it shouldn't be so difficult for people to realize that it's a ridiculous fantasy.
Write fibonacci(N) in ANY visual language, and tell me you couldn't have done it easier, faster, and more coherently in python or whatever. It's obvious.
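For the record, here's the textual version being alluded to; a minimal iterative sketch in Python (one of many ways to write it):

```python
def fibonacci(n):
    """Return the n-th Fibonacci number (fibonacci(0) = 0, fibonacci(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # shift the window one step along the sequence
    return a

print(fibonacci(10))  # → 55
```

Five lines of logic; the equivalent wiring diagram in a box-and-arrow language is left as an exercise.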
> Write fibonacci(N) in ANY visual language, and tell me you couldn't have done it easier, faster, and more coherently in python or whatever. It's obvious.
Honestly, I think most languages have shitty syntaxes (syntaxi?) and that the issue is not about visual programming.
The other problem is that everybody who has a big idea for visual programming has the same, actually shitty, idea.
Drag-n-drop blocks that represent logic.
UML is not a thing anymore because it's useless and wastes time.
Visual programming is great for describing high-level logic, but not low-level semantics.
Try to describe your last trip to the grocery store with just emojis. Maybe you get creative and come up with a paper for your list, some fruits, and stuff like that, but how could you guarantee that the user on the other end of your description gets the exact same idea? You can't.
With words, I can write my comment how I want, and I'm more or less guaranteed that people will read it the way that I intended them to, because it wouldn't make any sense any other way.
Obvious epiphany to be had here is that you cannot use the government to subsidize your investment in S&P. At least not legally, and to my knowledge.
At college, you can live a pretty comfortable life. You get a place to live, food to eat, work to do, material to learn, friends to talk to, supervisory staff to help you with hard problems, etc.
I hate to say it, because I think US higher education costs are absolutely ridiculous, but I still think that higher education is more rewarding than working a salaried job for a couple of years to dump some money into the S&P. I'd even go so far as to argue that people who have graduated from a college or university are more likely to be prepared to safely and responsibly handle problems that occur in the real world than someone who just happens to have a lot of money from the S&P.

I know that's a bit vague, but I think my point is clear enough: having wealth is not a replacement for having a satisfying, stimulating wealth of knowledge and a sense of purpose for one's life. Not to knock people in finance, I love finance, but it's not for everybody, and if the whole next generation of college students decided to structure their whole lives around how much money they're LIKELY (i.e., not necessarily guaranteed) to make, it'll be a boring generation that the next one has to learn from.
Student loan debt regulations need a rethink. How can we:
a) not lose money from paying for kids' school
b) not fuck kids over by saddling them with hundreds of thousands of dollars of debt for an admittedly mediocre education (compared to some European or Asian schools)
c) keep the new system automated enough to reduce politicians' potential for corruption by manipulating the money
d) provide some sort of transition schedule to move current students (paying off traditional loans) to the new system, without fucking over agencies/firms that are rightfully owed
The reason I mentioned it is that the condition may have been the reason that Hayes did not 'see' the boundary between a bank taking into account its advantage when arriving at its own Libor submission on the one hand, and the wholesale cooperation between several banks and brokerages on the other. That boundary was largely social and not documented/codified.
I would think anyone even remotely familiar with economics would realize how significant even small groups of agents can be, and how they can have rippling effects across an economy.
Maybe there was a lapse in judgment somewhere, but that doesn't excuse what he did: his actions caused HUNDREDS of billions of dollars to move around on misleading terms.
However, I think he was too harshly sentenced, and I think these "make an example out of him" cowboy judges only exacerbate the situation. If I were committing global-scale fraud right about now, I'd spend even more time and take even more care to make sure that I can't be caught or unmasked. Hayes will serve as an inspiration to future fraudsters; you can count on that.
That wasn't just an idea. People have used Twitter for exactly this purpose for a long time, because it skews user numbers. I'd also assume that overhead is very low for C&C botnets, since it's ultimately just a little pub/sub message queue, so there's even less incentive to shut them down. I get at LEAST three followers a day with "buy RTs, followers" in their profile on my personal Twitter.
Spam botnets ain't nothing new. They'll always be around.
Do you know very much about GPUs? Would it be viable to implement a kanren on top of CUDA or something like that? Wouldn't that lend kanren insane performance gains?
not too surprisingly, a lot of logic programs don't parallelize very well at all because of very linear dependencies (control flow).
and some do a lot of largely independent but very regular work that would execute quite well on a simd/vector/smt array.
people have come up with some tricks to map control flow into simd (like some really cool parser tricks), but i think in general those have regimes where they have sub-serial performance.
so maybe? if you had the magic compiler? or you provided some manual annotation support? or a robust ffi?
for sql, which has a much more limited footprint, there's been some cool vectorization work.