Hacker News | DukeBaset's comments

Hi, do you have any idea how one goes about making a liquid mode for mobile PDF readers like the Adobe PDF app? I often commute and have to read on my phone, but I'm limited by the fact that Adobe only renders files of 200 pages or fewer in liquid mode. I would be interested in helping develop such an app (I am not a programmer by profession but have been learning for a few years in my spare time).


Sorry, I have no idea about that. But a quick and dirty fix for your problem could be to split your PDF into multiple files, each with fewer than 200 pages.
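Splitting is easy to script. Here's a minimal sketch using the third-party pypdf library (`pip install pypdf`); the 200-page limit and the output naming scheme are assumptions based on your description:

```python
def chunk_ranges(num_pages, max_pages=200):
    """Return (start, end) page ranges, each covering at most max_pages pages."""
    return [(i, min(i + max_pages, num_pages))
            for i in range(0, num_pages, max_pages)]

def split_pdf(path, max_pages=200):
    """Write <name>_part1.pdf, <name>_part2.pdf, ... each under the page limit."""
    from pypdf import PdfReader, PdfWriter  # third-party: pip install pypdf
    reader = PdfReader(path)
    for part, (start, end) in enumerate(chunk_ranges(len(reader.pages), max_pages), 1):
        writer = PdfWriter()
        for page in reader.pages[start:end]:
            writer.add_page(page)
        out_path = f"{path.rsplit('.', 1)[0]}_part{part}.pdf"
        with open(out_path, "wb") as f:
            writer.write(f)
```

For example, a 450-page file would come out as parts of 200, 200, and 50 pages, each of which liquid mode should accept.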


What does "Top Coder" mean? It's the people who write the Stack Overflow answers, not the ones who copy them.


I mean, other than when invading poor, impoverished third-world countries for reasons full of deceit, corruption, and material gain, when has Western media ever lied to us?


What I wonder is why poor wage slaves like you enjoy tonguing the boot so much.


Well, three responses:

1. I have an addiction to food and shelter and my parents seem to have a problem taking care of someone who is almost 50.

2. According to DQYDJ, I'm in the 97th percentile of income earners [1]. I'm not bragging; a college grad five years out of school would be too, as an SDE2 at any major tech company.

3. Are you independently wealthy or do you also exchange labor for money?

[1] https://dqydj.com/income-percentile-calculator/


This software is working as intended. It's supposed to be a screen, and it's doing that well. I know this sounds crass and crude, but that is capitalism for you.


The real problem with dominant software solutions is their scale. In the worst case, that means that if you don't meet the criteria for one job, you don't meet them for any job, even if you fail for something stupid. From the company's perspective you lose a lot of diversity, which is a vicious cycle.

Maybe attaining monopoly-like status and convergence on a global scale is part of capitalism. But hopefully enough businesses will realize they are leaving money on the table, and capitalism will incentivize the development of new solutions (software or not) to unlock this opportunity. It will take time, though.


> This software is working as intended. It's supposed to be a screen. It's doing that well.

Perhaps, in the same way that a foot surgeon who amputates everything below the neck is doing that job well.


I am just a layman interested in the field, but it sort of makes sense that you can have logical rules and inference. The whole point of AI as usually conceptualized was that machines would understand logic (insert your favourite Dr Who/Star Trek joke here). So, briefly: what happened? I get the charm of ML/DL, but it's essentially data fitting. Wouldn't you expect logic in AI? What went wrong, and what is the state of the art now?


Artificial Intelligence, as a field of research, was basically created by a man named John McCarthy, who came up with the name and convened the first academic workshop on the subject (at Dartmouth, in 1956). John McCarthy (who is also known as the creator of Lisp) was the student of Alonzo Church, who gave his name to the Church-Turing Thesis and is remembered for his description of the lambda calculus, a model of computation equivalent to universal Turing machines. These were all logicians, btw: mathematicians steeped in the work of the founders of mathematical logic as it was developed mainly in the 1920s by Frege, Hilbert, Gödel, Russell and Whitehead, and others (of course there was much work done before the 1920s, e.g. by Boole, Leibniz... Aristotle... but it only really got properly systematised in the 1920s).

So it makes sense that the field of AI was closely tied with logic, for a long time: because its founder was a logician.

There were different strains of research with subjects and goals adjacent to and overlapping with AI as John McCarthy laid it down, for example statistical pattern recognition and cybernetics (the latter intuition I owe to a comment by Carl Hewitt here on HN). And of course we should not forget that the first "artificial neuron", described by McCulloch and Pitts in 1943 (long before everything else of the sort), was a propositional logic circuit, because at the time models of human cognition were based on the propositional calculus.
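As an aside, a McCulloch-Pitts unit is simple enough to sketch in a few lines: it fires iff a weighted sum of boolean inputs reaches a threshold, which is exactly enough to implement propositional connectives. A toy illustration (my own encoding, not their original notation):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: outputs 1 iff the weighted input sum meets the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Propositional connectives as threshold units over 0/1 inputs.
def AND(a, b): return mp_neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
def OR(a, b):  return mp_neuron([a, b], [1, 1], 1)   # fires if at least one input fires
def NOT(a):    return mp_neuron([a], [-1], 0)        # inhibitory weight inverts the input
```

Networks of such gates can compute any propositional formula, which is why the early cognition-as-logic picture and the first neural models were the same research programme.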

The problem with McCarthy's programme, of logic-based AI, is that logic is full of incompleteness and NP-hardness results, and the more expressive power you demand from a logic language, the more of those endless pits of undecidability and despair you fall down. So work in the field advanced veerrryyy slooowwllyyy and of course much of it was the usual kind of incremental silly-walk stuff I point out in the comment above. To make matters worse, while significant academic results drip-dripped slowly and real-world applications were few and far between, you had prominent researchers overpromising and launching themselves into wild flights of fancy with abandon based on very early actual results (certainly not McCarthy! and I don't want to point fingers but cough Marvin Minsky cough).

Then this wretched dog by the name of Sir James Lighthill :spits: was commissioned by the British Science Research Council, the holders of the Spigot of the Tap of Funding, to write a report on the progress of AI research, and the report was full of misunderstandings and misrepresentations, and the Spigot was turned off and logic-based AI research died. Not just logic-based AI: Lighthill's :spits: report is the reason the UK doesn't have a robotics sector to speak of today. Then, while all this was happening in the US and Europe, the Japanese caught the bug of logic-based AI and decided to spend a bunch of money to corner the market for computers, as they had with electrical devices and automobiles, and instituted something called the Fifth Generation Computer project (special-purpose computer architectures to run logic programming languages). That flopped, and it took with it one of the few branches of classical AI that had actually delivered: automated theorem proving.

The first link I posted in the comment above is to a televised debate between Lighthill on the one side and, on the other, John McCarthy, Donald Michie (the dean of AI in the UK) and, er, some other guy from cognitive science in the US; I always forget his name and I'm probably doing him a disservice. You may want to watch that if you are curious about what happened. Pencil pushers without an inkling of understanding killed logic-based AI research, is what happened. It was a bit like the comet that killed the dinosaurs and gave the mammals a chance: research directions adjacent to AI, like probabilistic reasoning, connectionism and pattern recognition, found an opening, took their chance, and have dominated research since. I am well aware of the connotations of my own metaphor. What can I say? Dinosaurs are cool. Ask any five-year-old to explain why.


Thank you for the detailed reply. Sadly, killing off logic-based AI isn't the only crime we can attribute to pencil pushers. But I had an intuition I wanted to ask about.

Can a sort of vaguely type-theory-based approach (I mean in the programming sense) be used to reason about law and the like?

Suppose one article says a man should pay 20% income tax, but some other act of some other law says a man shall pay 30% income tax. Obviously I'm just giving an example, but I mean long, cryptic legalese busting in general. Can we detect contradictions, or show that a law or an agreement (say, something like the TPP) is inconsistent, by defining types and transactions?

I'm sorry I couldn't word it better, but I hope you get the gist of what I'm saying.


I haven't really looked into that kind of thing. There's work I'm aware of on legal argumentation with logic programming and first-order logic in general, for example:

https://link.springer.com/article/10.1007/BF00118494

A quick search on the internet also turns up this review that looks like it may have some information relevant to your question:

https://rd.springer.com/chapter/10.1007%2F3-540-45632-5_14
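That said, the contradiction-detection part of your tax example is easy to illustrate in miniature. A toy sketch, with rule encodings and statute names entirely made up for illustration (this is not how the linked work formalises law):

```python
# Each rule is (source, attribute it fixes, value it assigns).
# All statute names and rates here are invented.
rules = [
    ("Income Tax Act s.12", "income_tax_rate", 0.20),
    ("Finance Act s.7",     "income_tax_rate", 0.30),  # conflicts with the rule above
    ("Finance Act s.9",     "vat_rate",        0.15),
]

def find_contradictions(rules):
    """Group rules by the attribute they fix; report attributes assigned conflicting values."""
    by_attr = {}
    for source, attr, value in rules:
        by_attr.setdefault(attr, []).append((source, value))
    return {attr: assignments
            for attr, assignments in by_attr.items()
            if len({value for _, value in assignments}) > 1}
```

Here `find_contradictions(rules)` flags `income_tax_rate` (20% vs 30%) but not `vat_rate`. The hard part in real legalese is not this check but the formalisation: statutes carry exceptions, priorities (lex specialis, later law overrides earlier), and vague predicates, which is why the serious work uses logic programming rather than flat rule lists.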


Multiple redundant servers? I'm way out of my depth here, but if a SPOF (single point of failure) is your main problem, perhaps you just need some redundancy?


Perhaps the app itself could use ML to detect, flag, and prevent known images from being sent...


"Why is Whatsapp using up my phone's battery?"


I'm opposed to this on other grounds, but computing and checking a single hash for each image wouldn't be that big a burden.


Agreed. You mentioned ML, though, and that's a different matter.

Checking hashes is definitely viable, but it only works for exact copies of known images.
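For concreteness, here's what the exact-match check amounts to; the blocklist contents are a placeholder, and a real deployment would use a perceptual hash (e.g. PhotoDNA-style) precisely because a cryptographic digest breaks on any single-byte change:

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests (hex) of known images.
BLOCKLIST = {
    hashlib.sha256(b"known image bytes").hexdigest(),
}

def is_known_image(image_bytes, blocklist=BLOCKLIST):
    """Exact-match check: one changed byte produces a completely different digest."""
    return hashlib.sha256(image_bytes).hexdigest() in blocklist
```

So re-encoding, resizing, or cropping an image defeats this entirely, which is the gap ML-based matching is meant to close, at a much higher battery and privacy cost.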


Hopefully Google drinks the Kool-Aid and kills itself instead of going after one head at a time.


Perhaps the LinkedIn devs could put "open to work" frames on their own profile pictures now...

