Hacker News

I disagree.

From time to time I hire entry-level people. Their writing, which in general is the primary method of communication, is atrocious. I am not complaining about fundamental grammar or punctuation. I make those mistakes more frequently then most myself.

There is no cohesive flow of thought; there is a lack of logical structure. They are unable to unpack their justification for recommendations. This costs us because we have to hold multiple "draft reviews" until the document is succinct, or costs us when the recommendations are misunderstood.

Over the last three decades I have watched the skill of writing decline.



  I make those mistakes more frequently then most myself.
*than

...this checks out.


Bahahaha! Yes. Thank you. Of course I could respond that English is not my native tongue, but that would just be an excuse, not a justification.


So automate. If computers are better than students at writing essays, they're probably better than entry-level hires at writing whatever documents you need.


The purpose of writing in professional settings, especially internal documents for an engineering team, is to convey information.

Even if models are better at writing essays, it's highly unlikely that the generated essays will convey useful and accurate information. I.e., the writing may be better as a context-free composition of words and symbols, but the semantic content of the writing within the context of the business will be at best nonsense and possibly misleading.

In GP's context, excellent writing that's pure bullshit is even worse than bad writing.

Generating design documents that discuss real tradeoffs from a combination of email threads, slack messages, meetings, and code is quite a bit different from generating the billionth essay on Napoleon. We use the latter, in part, to practice the basic skills required for the former. But just because a model can do a half-decent job at the latter doesn't mean that it is anywhere near being able to do the former.


I haven't seen a proper document (as in "documentation") for years. The API code I have to link to is auto-documented by compiling together comments in the code base. The language documents are written by humans, probably, but in a style that a computer could easily replicate (if they can do better than a high-schooler's essay, definitely).

90% of the writing tasks I've encountered in the last 10 years have been content generation for content marketing. And failing that entire industry dying in a fire (which would be a good thing), having it auto-generated by AI would be simple and no loss to humanity.

> it's highly unlikely that the generated essays will convey useful and accurate information

My MBA essays didn't do this either, so I'm not sure what the problem is


> I haven't seen a proper document (as in "documentation") for years.

I am sure that is true for many types of code. But also, for many projects, you can train up a reasonably productive coder in 3-6 months of bootcamping. In those cases, you probably don't need much more than auto-generated documentation. But also, in those cases, I'm not sure what a GPT-like model is doing other than creating the facade of written documentation around a product and process that doesn't really need anything other than the most superficial of manually written documentation.

On the other hand, there are many cases where actual written documentation is quite necessary.

I used to work on extremely high-trust code bases, and now I mostly work on mid and late stage large "Big R, Big D" projects.

In both cases, written documentation is extremely important for communicating the whats and whys of implemented code. Auto-generated API documentation would be pretty useless without a very literate programming style of development (think tens or hundreds of pages of math-heavy documentation for every KLoC or so, and in rare cases an order of magnitude more). And even then it would be awkward in many cases; e.g., where there are close interfaces between the software and product-specific custom hardware (chips, sensors, or actuated mechanical components). Or especially mathematics-heavy portions of programs, where you really need to read a couple of dissertations before you can start reasoning about the implementation details of the methods.

Additionally, we do/did a lot of writing for non-technical/differently-technical audiences. E.g., patent attorneys (who need thorough descriptions for translation into effective patentese), colleagues who interfaced directly with safety regulators, colleagues who interfaced with lawmakers, internal documents that might one day be seen by lawmakers or counsel, communication with external technical stakeholders when working on either greenfield or evolving standards, etc.

A strong foundation in basic writing skills is very important when you need to communicate the same basic facts to a variety of technical audiences who all have different concerns, incentive structures, and backgrounds.

> My MBA essays didn't do this either, so I'm not sure what the problem is

I'm stunned, I tell you! Stunned! /s ;-)


How are computers going to write that without understanding? A two sentence prompt isn't going to give you what you need.


I refer to the article "computers are better than students at writing essays".

It can't be both ways: either AI is now better at writing than people, or it isn't. If it is, then let's use it to write everything it can. If it's not, then we don't have a problem.


If people are bad at structuring an argument and conveying their intent in writing, what makes you think they are better at those things when speaking, or in any other form of communication?



