
Not being BSD-licensed, I would expect it never to get beyond the ports tree.


Yup, that is my worry too.


So, having done cost estimation before, this is horrible. About all it would be good for is the initial SWAG at brochure-ware. Generally speaking, cost estimates should never be a fixed point; they should be given as a range, with assumptions that, when violated, allow the cost numbers to change. In addition, there's no measurement of complexity through requirements. Basically, all these estimators are useful only for the simplest software projects with extremely limited requirements and scope.

The cost of the software is a function of the known requirements, which are carefully enumerated during costing to generate some sort of complexity model (for example, function point analysis). That's fed into a cost model that translates the complexity model into a dollar range. That interval is then used to generate a final, contractual number. But notice all the work that went into that estimate. What's more, the estimate is refined during the work, so that new data is incorporated into the complexity model, allowing changes in scope or requirements to be re-priced.
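
To make the range idea concrete, here's a toy sketch in Python. The function-point count, cost-per-point figure and multipliers are all made up for illustration; a real model would be calibrated from historical data.

    # Toy range-based estimate; all numbers are illustrative, not calibrated.
    def estimate_range(function_points, cost_per_point=650.0,
                       optimistic=0.8, pessimistic=1.6):
        """Turn a complexity score into a (low, likely, high) dollar range."""
        likely = function_points * cost_per_point
        return likely * optimistic, likely, likely * pessimistic

    low, likely, high = estimate_range(function_points=120)
    print(f"${low:,.0f} - ${high:,.0f} (most likely ${likely:,.0f})")

The point is that the output is an interval tied to explicit assumptions, not a single definitive number.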


To be clear, this does not replace an official estimate (we also do not like fixed estimates and prefer agile pricing). What it does give, though, is a ballpark idea for people with no information at all. Oozou is a 6-year-old agency; we understand the problem deeply, but wanted to give people something to get started without full-blown proposals, RFPs, etc.


I view it as dangerous because it spits out a definitive answer without a clear explanation of the model used to generate the estimate. If you look at other cost estimation tools (even closed-source tools), they will explain the model and assumptions used to generate the estimate. You can then determine whether the model and assumptions apply to you.

It also doesn't indicate the kind of application (which the tool probably assumes is a largely stand-alone application). You can't stop some clients from going to these sites, looking up a cost, and not understanding what goes into an estimate. What would be even worse, in my mind, is to have other developers use a tool like this to generate a cost and then fail because they grossly underestimate complexity.


It does explain the model - each feature is given a certain number of developer days that are billed out at $450/day. It's all broken down if you click "show calculations".
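
In other words, the whole model boils down to something like this (Python; the feature names and day counts below are hypothetical, only the $450/day rate is theirs):

    DAY_RATE = 450  # their quoted rate per developer day

    features = {             # hypothetical feature checklist
        "user accounts": 5,  # estimated developer days per feature
        "photo upload": 3,
        "push notifications": 4,
    }

    total_days = sum(features.values())
    print(f"{total_days} days x ${DAY_RATE}/day = ${total_days * DAY_RATE:,}")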

I personally think their model is pretty shitty, because if you build an app as a checklist of features you'll end up with something users won't want to use, and if you polish all the rough edges and make sure everything integrates well together, you will spend a lot more time on integrations than on implementing the features. (My rule of thumb - across many projects - is that when a project is feature-complete, it is usually between 40-50% shippable. When it ships, it is about 50% "done", where "done" means that all the initial development tasks required to make it stable and useful to users are complete. When it's "done", it's consumed roughly 10% of the calendar time and 50% of the developer time that will ultimately be invested in the project.)

But you can at least judge the model they're using for yourself.


A ballpark estimate should be given within a range.

If you aren't communicating the probability of it being either less or more expensive, then you are setting a false expectation.


I've also done a ton of cost estimation and I thought this was a reasonably good approach for capturing the "core cost" of development of an app.

What it doesn't capture is testing beyond just design-code-unit-test such as multi-device testing, user testing, user acceptance testing, etc. and all of the change management that goes hand-in-hand with that.


I think that argument is only correct if you have fairly well understood pieces of work: I'm going to build another on-line social recipe app. People type in recipes, they share those recipes, make comments on those recipes, maybe review the recipes, etc. I think their metrics break down very quickly when you start doing more interesting things, like automatically finding similar recipes based on the descriptions of the steps, or recommending recipes based on a user's ratings.

Or, for that matter, correctly internationalizing the recipes and converting measurement units. (You don't want to have to add 3.43272 cups of flour to 0.2347 gallons of water when translating metric to imperial - you'll need logic to scale quantities up/down and round within a given tolerance, which is often narrow when you start baking pastries.) There might be a service for this specific example that I'm not aware of, but the point is that when you go beyond a very narrow scope, I believe this tool breaks down.
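
Just to illustrate the rounding problem, here's a rough Python sketch of the kind of logic you end up writing. The density figure and tolerance are assumptions; a real implementation needs per-ingredient densities and much tighter tolerances for baking.

    from fractions import Fraction

    GRAMS_PER_CUP_FLOUR = 125.0  # rough density assumption for plain flour

    def grams_to_friendly_cups(grams, tolerance=0.05):
        """Convert grams to cups, snapping to the nearest 1/8 cup when that
        stays within the given relative tolerance of the exact value."""
        exact = grams / GRAMS_PER_CUP_FLOUR
        snapped = Fraction(round(exact * 8), 8)
        if abs(float(snapped) - exact) <= tolerance * exact:
            return snapped
        return exact  # fall back to the unrounded value

    print(grams_to_friendly_cups(430))  # -> 7/2, i.e. 3 1/2 cups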


Very true. It doesn't leave any room for exploration of requirements or the usual customer waffling.


But what if it's 200% bigger? I remember reading an article that looked at the time it takes to load MS Office under Windows. Even though computers over the last 15-20 years or so are much, much faster, the wall-clock time to get Word or Excel up and running has pretty much stayed constant. Granted, disks haven't kept pace with CPUs, but when your computer is 100 times faster and it still takes just as long... is that a good trade-off?


One thing that strikes me is the explosion in dependencies in most software. I'm guilty of this, too. I've seen plenty of examples where an entire library or framework is added to a project just for a couple of features. Add a few libraries like that and suddenly you have a few megabytes of additional libraries, where maybe 90% or 95% of the features will never be used. A good article a while back looked at common unix utilities, comparing the size of commands like cp from the 1980's to the present. Most of the bloat had to do with features that almost nobody ever uses. It wouldn't be so bad if everyone used the same set of libraries. For example, almost all applications have a dependency on certain core libraries like libc.

But we often use different libraries that do essentially the same thing, or different versions of the same library, so instead of 1 copy of libfoo.jar, I have 2 copies of libbar.jar and 4 copies of libfoo.jar that may all do essentially the same thing. Then I have essentially the same functionality in C++ (some libraries that wrap collections) and in Python (where maybe one of the Python versions wraps one of the C++ libraries, but a different version). And of course I have a version installed in each Ruby environment. Add to that their dependencies, and the dependencies' dependencies, and you have a perfect storm of craptastic. So libfoo.jar 1.2.3 depends on libbaz.jar 2.3.4, which depends on libqux 1.5.7. Let's say each one is 250k, and all I ever used was some list-sorting utility in libfoo.
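
A toy version of that dependency math (the library names and the 250k figure are the made-up ones from above):

    # Walk the (made-up) dependency graph and add up what gets pulled in.
    deps = {
        "libfoo 1.2.3": ["libbaz 2.3.4"],
        "libbaz 2.3.4": ["libqux 1.5.7"],
        "libqux 1.5.7": [],
    }
    SIZE_KB = 250  # assume each library weighs roughly 250k

    def closure(root, graph, seen=None):
        """Collect root plus everything it transitively depends on."""
        seen = set() if seen is None else seen
        if root not in seen:
            seen.add(root)
            for dep in graph.get(root, []):
                closure(dep, graph, seen)
        return seen

    pulled_in = closure("libfoo 1.2.3", deps)
    print(f"{len(pulled_in)} libraries, ~{len(pulled_in) * SIZE_KB}k, "
          "all for one list-sorting utility")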

But I don't know what we could really do about it. You can't force everyone to program in C++ or limit them to a set of blessed libraries. I think developers could be more judicious about when they could just add a few lines of code and when they actually need to bring in a hard dependency on an external library. And it happens with commercial software as well. Maybe this is just the way the world will be.


This is one of the biggest sources of bloat. In the node and ruby ecosystems especially, dependencies proliferate exponentially: an application pulls in 12 libraries, each of which pulls in 12 of its own, which each pull...

Downloading the dependencies for Ghost, the node blogging platform with the explicit goal of simplicity and minimalism, takes me minutes.

Compare this with the status quo when writing programs in C, where you might link to 4 libraries total, one of which pulls in 2 others as dependencies.

I've come to suspect that the super convenient package managers that all the "modern" languages have are at fault for this.


And you're giving the exact opposite example to the parent post: why do node projects have hundreds of dependencies? Because those dependencies do exactly one thing most of the time (and usually pull in some other do-exactly-one-thing dependencies to do it).


That may be true in Node, but as a counter-example, in Ruby I saw dependencies creep into projects where there would be some minor point like "I need to do X," and that's done by library Y. In addition, gem Y does A, B, C and D, and in order to do all that it drags in several dependencies which are not directly needed. When you actually look at X, you realize it's not that difficult to do yourself. So at what point should you just write the functionality yourself, and at what point do you rely on external libraries (and any baggage they bring with them)? If you write it yourself, you have to maintain that code (even if it is fairly trivial); otherwise you have to maintain the dependency (keeping the gem up to date, maybe making small code changes to accommodate breaking changes in the gem). It can be a real mess.


Dart makes a decent argument for a smarter compiler. They've implemented "Tree Shaking" in their compiler: essentially cross-library dead code elimination.

This would probably be quite tricky in Java land where reflection does add new entry points, but it could be used to solve the problem of "I only need this one function from this library, don't compile in anything else".
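
The core idea is just reachability over a call graph. Here's a toy sketch in Python (not Dart's actual implementation): anything not reachable from the entry points gets dropped, and reflection hurts because it adds edges the compiler can't see statically.

    # Toy tree shaker: keep only functions reachable from the entry points.
    call_graph = {
        "main": ["sort_list"],
        "sort_list": ["compare"],
        "compare": [],
        "fancy_unused_feature": ["compare"],  # never reached from main
    }

    def reachable(entry_points, graph):
        keep, stack = set(), list(entry_points)
        while stack:
            fn = stack.pop()
            if fn not in keep:
                keep.add(fn)
                stack.extend(graph.get(fn, []))
        return keep

    live = reachable(["main"], call_graph)
    print(sorted(live))                    # ['compare', 'main', 'sort_list']
    print(sorted(set(call_graph) - live))  # ['fancy_unused_feature']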

I was personally quite surprised when I ported code from Node to Java/Groovy and the resulting shaded JAR was > 70MB; I think at some point it peaked above 110MB. I don't know what I changed, but it's down to 35MB now. Meanwhile, the code we've written in-house on that codebase boils down to 1MB. But besides figuring out that I don't want to make local builds I scp to staging (because scp is terrible), these numbers are largely irrelevant for writing server-side software that runs on dedicated machines.

We could certainly make it more efficient, but there's exactly zero business case for it.


You can only tree-shake a whole-program compilation, but then you cannot use compilation units, modules and modularity efficiently. You have to choose one or the other.

Every normal compiler implements simple (i.e. module level) dead-code elimination already.

EDIT: Of course you could use static libs, which do pull in only the used symbols, but then you cannot share them across apps and update them independently.

I implemented a tree shaker for my lisp and was very happy with it, esp. for delivery. Like Go does it nowadays.


Right, I guess I wasn't clear, this was a shaded/fat jar, so it had all its dependencies included statically.

I feel like our computing infrastructure has gotten to the point that dynamically linked libraries are no longer a good choice. I think dynamic linking has only caused us problems at work (devs install Node deps on the staging server, forget to tell ops, service crashes when deployed in prod), and the memory/disk/transfer overhead are practically irrelevant at this point. The only remaining reason to have dynamic libs is the idea that they can be updated without help from upstream, but that really only works if the software is compatible with the latest libraries, which isn't always true.

Supposedly ProGuard has some cross-module dead code elimination for JARs, but I haven't tried it: http://proguard.sourceforge.net/


> I ported code from Node to Java/Groovy

How much of that was Java's fault and how much Groovy's, I wonder?


Given my own code was only 1MB total, neither. 97% of the size was from external dependencies. FWIW, it was a "shaded" jar that had all its dependencies linked in statically.


A good article a while back looked at common unix utilities, comparing the size of commands like cp from the 1980's to the present. Most of the bloat had to do with features that almost nobody ever uses.

That was actually a talk called "Bloat: How and Why UNIX Grew Up (and Out)": https://www.youtube.com/watch?v=Nbv9L-WIu0s

Well worth a viewing.


We can write tools that stop using idiotic ideas like dynamic libraries and only link in symbols that apps need.

If you only use one or two features out of a lib, why are you dynamically linking them in? If you do a static link, the linker can at least remove most of the bloat you don't use.


I think top talent competes globally, so it's able either to move to where it makes more money or to demand higher rates. It's the anonymous guy in the cube whose skills may range from incompetent to excellent - you don't know what you're getting at first.


I think one difference that I've seen is that the US has a very different attitude toward computer programming. In the US it's more glamorous? That's not the right word, but when I talk to foreign developers who emigrated to the US, it seems like for them it's more about having a good job than about being a genius hacker, rich by 25, or a technical tour de force. I see a lot more women overseas who become programmers. Maybe because the culture is different, and becoming a programmer is like becoming an architect or engineer?

I think that different culture might self-select US developers the same way top schools produce highly motivated, successful people because they recruit highly motivated, successful kids. Not to say every US developer wants to climb to the top of some technical or economic mountain, but it seems like maybe there's a smidge more passion among US developers. Also, intelligent, math-oriented individuals have other avenues in the US, like finance, and (unlike much of the rest of the world) dentistry and medicine are lucrative careers. Meaning you choose programming more because you want to program, whereas in other countries it's because it's a better-paying job.

I've also noticed that in some countries, once you've done a few years writing code, you quickly want to join management ranks and develop a coding allergy. More so than in the US, where it seems like 50 year old developers still want to write code. I get the sense that, in some countries, if you don't get into management then you are a failure at some level. So in the US you can find someone with 10+ years experience developing software, but in other countries you just have people who've stagnated and never moved up.

I dunno.


I am an Indian developer and I feel that you're onto something there. This needs more discussion.

For example: an executive at HCL, a leading Indian outsourcing firm, called American developers "unemployable":

>> He says students from countries like India, China, and Brazil are more willing to put the effort into "boring" details of tech process and methodology, such as ITIL, Six Sigma, etc.

http://www.dailytech.com/CEO+of+Microsofts+Indian+Partner+Co...

So basically, American devs want challenging work and technical growth, while Indian devs are happy to even have a job.


There is definitely something to this. From my experience (American), nobody writes their specs.


>Also, intelligent, math oriented individuals have other avenues in the US, like finance, (and unlike the rest of the world) dentistry and medicine are lucrative careers.

Dentistry and medicine are very lucrative careers in India.


But not all over the world. In some countries, even in Europe, you can be a physician and do 'okay,' but being a computer programmer would be better.


Excellent suggestion. I highly second that. Yes, even though more and more vendors are making that harder. For example, claiming that reinstalling the OS voids your warranty.


Because then they wouldn't benefit from the crapware they installed, so they need to disincent you.

Not running crapware is theft, just like not watching commercials is theft. The only way they can afford to sell you a PC at those prices is by subsidizing their profit with crapware income. /s

http://www.freerepublic.com/focus/news/676651/posts


Google puts out free or cheap services and devices because they're trying to draw users to their platform, which gathers metrics about usage. These feed into the services, but they also feed into Google's need to be a better advertising platform. Although we think of Google as the company behind the eponymous search engine and the Android phone, it's really an advertising company. I'm not sure if advertising is still in the 95% range of revenue, but it's still high. I have a 7" Nexus tablet which (for the time) was a great tablet at an insanely good price. But I knew I was essentially giving up information about what I read, what I listen to, and what apps I use in exchange for the free services and discounted device.

In contrast, my Apple devices aren't cheap, and there is less collection of data with the intent of selling my attention to third-party advertising clients, but they are more expensive and I pay an iCloud subscription fee. For the most part the Mac software doesn't suck and works reasonably well. I don't love everything about it, but it's more appealing than Picasa for photos. I do pay for extra storage on Google to back up my pictures (over 100 gigs of family photos). But I am definitely paying more to Apple in a direct sense (buying computers and phones + iCloud) than to Google (Google Play and 1 device). Apple's margin on hardware is high, but they make a quality product (for the most part).

What I give Google is information about myself, my work, my family, the music I listen to, the books I read, and anything and everything I ever searched for. They're pretty open and transparent about this, and I pretty much understand what I'm turning over in exchange for their free stuff. Look at the Google dashboard to see what they have on you. If there's an application or service hosted by Google, I pretty much understand their intent is to extract data from that service or application.

That works for most people, even though they really don't understand this exchange or think about it. Free is a powerful lure, and most consumers have come to expect not to pay anything at all for software and services. That's why they download a free game and keep playing it even when they find out it sends the contents of their address book off to some weird holding company. It's also why they get suckered into free games (because $4.99 is just too expensive on an app store) and then spend multiples of that amount on in-game purchases. It's why they're willing to turn over even extremely intimate details of their life for free e-mail or messaging services.

But the incentive for Google is not to make great software or hardware, because that's not how they make their money. Their incentive is to roll out services which help them collect information to feed their need for data about your tastes and behaviors. If that photo app you're using doesn't really give them more information, there's no incentive to make it better. Apple has an incentive to make software good enough to support their device/computer sales. Microsoft (arguably) has an incentive to make software good enough for manufacturers to license it. Apple and Microsoft, no matter how much people love to shit on them, actually have more straightforward, simple motives. They have an actual financial incentive to care about the quality of the product they deliver because you are paying them (directly or indirectly).

tl;dr Google only makes software good enough to draw you in so it can collect data. Other vendors are selling you something and therefore have an interest in making that thing better.


> Although we think of Google as the company behind the eponymous search engine and Android phone, it's actually an advertiser.

Don't forget about Play Store revenue. They're making money hand over fist by taking a cut of app sales, which is enough to financially justify Android. Oh, and the media sales too, which don't even require Android.


Beautifully well said.

When I realized all this, I switched to hosted paid email service and moved off using Google services. Last thing now was selling the Android phone and getting the alternative, where at least you get value and support for your money.


I have to think one major issue for a German startup (for example) would be that its ideas might not work in France because of French laws which might prohibit some feature of that German service. So the market for a French firm is (at least initially) France, which is significantly smaller than the US. Couple that with the fact that most US managers seem to look at a laptop as a basic tool, while many European managers seem to regard it as a tool for their subordinates.


The problem I have with the talk is that he's focusing on one narrow part of the economy and then using that as a proxy for the whole economy. It's also obvious he doesn't look at some of the negatives related to income inequality. For example, high levels of income inequality correlate with high levels of political instability and high levels of corruption. Countries with high income inequality are rarely gleaming examples of economic efficiency. They're often not very nice places to live. Countries with low levels of inequality (like much of Northern Europe) are nice places to live.

While Paul understands that we're all connected (you can't pull on one part of the bed-sheet without pulling on the rest of the bed-sheet), he hasn't made the connection that if one side of the bed is dragged into being a shit-hole, the rest of the bed will eventually be a shit-hole too, as it's all connected. In South America, for example, kidnapping is a huge problem, and the effect is to make those places less nice for the wealthy as well as the poor.

A lot of times the policies that reduce inequality aren't kill-all-the-startups, take-all-the-money-from-rich-people policies. Often they're simple things that don't take a ton of money, like Head Start funding, lowering the cost of post-secondary education and training, and providing certain basic services like health care. What the US has done is to look at balancing the budget by reducing taxes on the very wealthy (upward pressure on income inequality) while taking money from programs that reduce inequality.

There's a belief that most people who are poor are not working. Actually, many of them work quite hard. But washing machines work hard, too. Simply working hard doesn't get you ahead. In some cases there's a fair amount of luck involved. I sometimes look at the startup economy a little bit like pulling the slot lever. You can do everything right, but a lot of things still have to go well (some of which are outside of your control) before you cash out. Paul rightfully calls this risk, meaning that it's not guaranteed that an investor will get his pay out and it's far from guaranteed a founder will get rich. But it's bizarre to think keeping the US from sliding into an economic hell-hole will somehow destroy that slot-machine economy.


"Often their simple things that don't take a ton of money like head start funding, lower the cost of post-secondary education and training, and providing certain basic services like health care."

What is 'head start'?


http://eclkc.ohs.acf.hhs.gov/hslc http://en.wikipedia.org/wiki/Head_Start_Program

It's an early childhood learning program. Nothing's perfect but it has a pretty good track record.


thx @fullwedgewhale

