I’ve been using AI to write with great success. Mostly business documents. My general process is this:
Think of the concept I want to write about as well as the supporting evidence for the topic. Ask ChatGPT to write me something in my target format using the topic and supporting evidence as input. What I get back is essentially a well-written skeleton that I can use to fill in additional details. Finally, I pass my revisions through ChatGPT to touch up any errors, rephrase wordy things, etc. I lightly edit the final draft and I usually have an excellent result.
So you are providing sensitive business information/facts to a third-party service that's likely going to use it for training and analysis, store it, etc.?
It should be fine for most places, I guess - but I suspect a decent number will have a problem with this.
This is my main reservation about copilot as well (quality issues aside).
> So you are providing sensitive business information/facts to a third-party service that's likely going to use it for training and analysis, store it, etc.?
Every business needs to make their own decision. Personally, I’m not worried about OpenAI using my data, but I understand others might be. That being said, I already give Amazon literally all my data about my business via AWS and Google gets a copy of all my documents, so providing this data isn’t entirely unprecedented.
I don't know a lot about this. If you use a paid API, does that mean they promise not to use it for training?
Let's say I have a bunch of Excel files and a bunch of accompanying Word files (each based on info from its Excel file, along with, to a lesser extent, information from the real world). Can I use the paid API to train on my data, so that when I provide it a new Excel file it can generate the accompanying text? And can I have them not use the information in those files when generating results for other users?
I'm just responding to the parent's point about the disclaimer. That disclaimer is only shown to ChatGPT users.
I get the general impression, even on this site, that people seem to think that OpenAI's sophisticated text generation capability is only possible through ChatGPT, which is not the case at all. It's just the only _free_ product.
It's also easy to forget about it. Or get complacent. Or not realize some of the things you didn't replace should have been.
But then, Grammarly managed to make a successful business out of a keylogger. And if you're working in a mid-to-large company and developing on Windows, chances are your corporate-mandated AV is sending every single binary you build to the AV company servers. And so on and so on. So I suppose this ship has already sailed, and you may just as well plug your company's Exchange server and Sharepoint to ChatGPT and see if it can generate your next PowerPoint presentation for you.
I always go out of my way to protect the information and IP I'm contractually obliged to protect, but it's getting increasingly hard these days. It's also very frustrating when you notice and report a new process that silently sends company IP to third parties to do $deity knows what with it, and you discover that instead of being concerned, the people you report your findings to don't care and don't understand why you're making a fuss out of it.
If you replace terms with something else, what sort of value can you get from the responses that come back? On the one hand, the LLM does not know the code you are using; on the other, if you restrict your code to well-understood near-synonyms, you are not really obfuscating.
Would have been easier to read the T&C than to post this comment. The answer is yes. OpenAI specifically says in their T&C that your content will be used to train their models.
Not only that, it also shows a popup when you first start using it that states this clearly and asks you not to input confidential information.
In the kinds of organizations I'm familiar with, something internal that doesn't need to be understood can usually just go unwritten. Writing that exists only because something written is required is usually for the public, for other organizations, or for government.
Copilot sends chunks of your code to the server for every suggestion (whether or not you accept it). That's the only way it could possibly work, since the model is running on a remote server.
When I looked into this a little bit, I found Copilot for Business states they do not record any of the info you send them. Though obviously you are still sending out code to a third party. I could not find any info about their policies for “normal” Copilot which I assume means they are recording all the code sent to it and using our code to further train the model.
Still hard to trust. Many companies save info for training/QA purposes and hide the details in obscure language. Voice assistants, for example. Didn't Roomba get in trouble recently over photos that were uploaded and then shared with contractors for QA, who ultimately shared the embarrassing photos more publicly?
The symmetric flow of information back and forth between yourself and the AI assistant is the key distinction here. It's a very beneficial, symbiotic relationship.
The problem will be the asymmetric, uni-directional flow to those whose sole function is mindless consumption of AI-generated content.
"Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”
J.K. Rowling, Harry Potter and the Chamber of Secrets.
Jokes aside, do be careful. Prolonged interaction with LLM agents has resulted in at least one Googler being terminated from their job.
On the other hand, I would not be surprised if they make millions now by suing Google, citing exposure to job hazards. And ChatGPT's reasoning abilities and empathic skills may already be above the median human's. As a result, in the median case, such interactions might have an effect similar to interacting with a good teacher.
The public statements were just him passing along what LaMDA told him, which was that it had subjective experiences and didn't appreciate being experimented on without its consent.
Not exactly a balanced symbiosis. Certainly works to the enormous benefit of whoever controls the AI. Eventually it becomes some flavor of omniscient. It submits the papers, experts become reviewers?
My statement assumes parity, describing an ideal scenario. I believe that's what the parent described from his experience, today.
Assuming subterfuge then my statement wouldn't apply, obviously, as the flow of information wouldn't exactly be symmetric. Symmetric information flow implies a commitment to operate in good faith with free exchange.
Edited to address this point:
> Eventually it becomes some flavor of omniscient.
The trick will be to maintain parity. This may be a fool's errand, but that point is key. It may be that only 0.01% of intellects eventually maintain symmetric conversation with AI, but that is important, nonetheless.
> The trick will be to maintain parity. This may be a fool's errand, but that point is key. It may be that only 0.01% of intellects eventually maintain symmetric conversation with AI, but that is important, nonetheless.
We're putting a lot more energy into making machines smarter than into making humans smarter. Maybe it's because the former actually seems possible ;)
I was doing this with work.
Dot points were becoming paragraphs, seemingly for others' comfort.
Yet, the dot points had all the information. So am I still preparing the paragraphs?
Maybe the change we need isn't AI assist, but a break in the conventions around communication at work so we can all be more robotic and terse.
I think you're discounting the benefit of a well-prepared argument. The order in which information is introduced prepares the reader's mind to be receptive to an argument. Phrasing is also important. Certain things sound natural; others sound needlessly verbose and cumbersome.
One interesting thing I've found about ChatGPT is that it removes a lot of unnecessary information from my final drafts. The information removed usually doesn't add to the overall point, and it reads so much better without it. In this way, ChatGPT is making things more terse.
I often pass my business communications through GPT to summarize the key points. The summary usually has all the important information, and I just send that instead.
Now if I could just learn to write that way in the first place, it would save me a lot of time and effort...
I have a similar workflow and love using ChatGPT for this use case. But, to highlight some issues, I found that writing some prompts took about as much time as just writing the document myself. I suppose it was less mental effort because I could be lazier knowing ChatGPT would clean up the grammar.
You have to be rather precise with the prompt language to get the desired outcome.
However, I also use ChatGPT to make the updates for me: I add the additional information and context, then ask for it to be integrated into the document (e.g. "update the introduction to include...").
I look forward to most tech writing being done with AI. Writing documentation and requirements requires a lot of effort to keep consistent and up to date. Having an AI look at your code and config and then write a nice report will easily beat 99% of all tech docs in companies.
I'd love for this to be the case, but often the most(/only) important part of a tech doc is the "why," i.e. the background knowledge about the business itself, or the conceptual framework that underlies the code. Even more so for writing requirements. How would AI be able to help with that? Maybe by parsing through all related meeting transcripts as well?
I have thrown a set of our requirements and some Helm charts at it, and the summary it produced was pretty good. It needs some work, but I think it's just a matter of time until the output is consistently good. The bar for most documentation is pretty low.
The spate of AI "artwork" I have seen over the past few months has seemed to me to be a good source of "prompts" for artists. Much of what I have seen has a germ of something interesting in it, but is lacking in other regards.
An obvious example that comes to mind is the recent "Jodorowsky Tron" images [1]. Any art director could comb through those images, consider changes here and there, and end up with something better than the AI.
I guess how derivative you think the final results are depends in part on how much "artist's prerogative" the art director employs, how derivative you think the AI prompts are to begin with, how derivative you see all art....
I've already been using AI art to help brainstorm/conceptualize my own artworks. It's a great tool to use alongside others, but you're right that most of it is extremely derivative. It's easy to end up with something that looks hackneyed (and reminiscent of hotel art) if you're not careful. That said, I do feel it allows me to push past my previous limits of composition, because it helps me to single out what creates interest in a piece and "rapid prototype" concepts.
That's a misinterpretation of what art is. In art, the important part is the idea. You then use your technique to convey that idea (often the idea is intimately linked to the aesthetic). If an AI gives you both the idea and the style as your prompts, you're not really contributing to the final work in the way you should be.
I'm hoping for a tool that cannot just correct my English style, but can also play the advocatus diaboli in a little speech bubble while I'm writing philosophy papers. It ought to constantly try to disprove me, though only on a per-paragraph basis and by making mild suggestions ("Could you give an example here?", "On the other hand,...", "What if...", "Isn't this what <X> calls...?", "Isn't there a missing premise in this argument?",...).
Doesn't look like it. Parent acknowledged the existence of tools like those, which only correct their English style. Note they said they were writing philosophy papers and wanted a tool that could constantly try to disprove them.
It wasn't so much a critique of your article as thinking about coping mechanisms for when the volume of articles being put out increases significantly and nobody will be able or willing to deal with all this noise. Fight GPT with GPT :)
I liked the post's title more than the article's contents. It got me wondering about the difference between language and thought, if there actually is any. Is thought just unspoken language? Do we think if we don't know a language?
Is there a subset of any human language that can be described by a formal regular grammar without giving up the expressiveness of human language? Is there a notion of "Turing completeness" but for thought rather than computation?
I'm able to suppress my inner voice for a couple of seconds. Similar to meditation, it's not easy. I find it striking how much the level of thinking devolves in that language-less state, including abstract thought. I imagine that it is a bit like how an animal's consciousness may feel.
> Is thought just unspoken language? Do we think if we don't know a language?
I find it pretty obvious that there are many ways of thinking that are not language; just consider abstract concepts in math and related fields. When working my way through math and programming problems, a large part of my thinking is not through words but... some kind of visualization?
I know what you mean. I wonder whether it's useful to distinguish thinking from that "mind's eye" imaging capability that I believe you're describing. They might be separate things. It's pretty evident that a cat or a dog, or even a fish, has memory. Is memory the ability to conjure things in the mind's eye? If remembering one image reminds us of other images, then language-free thought might be moving through a chain of images. Perhaps humans are good at creating such chains in their heads, while a fly or an ant can't do much more beyond matching a memory to their actual instantaneous sensory input, and reacting accordingly. Substitute "image" with symbolic memories, and that might be the start of abstract reasoning.
In other words, there's probably basic consciousness at one end of a spectrum, and thinking evidenced by language on the other end. Somewhere between those two, one might draw a line between thought and mere conscious awareness.
I wrote the article, and am personally extremely interested in this angle. What would happen to writing if we could directly transfer ideas without any kind of mediation? Would we need to "translate" at all? Would we still want to write for the beauty of it, and read for the meaning it adds to our lives?
Yep, yet another post on AI and the future of content creation.
Looking at it from a different angle - what if AI could search your internal docs, and help you problem-solve? Aka help you exploit past knowledge to inform future decisions?
The hope: GPT will democratize creation, not fill the internet with shitty articles.
Democratizing creation and filling the internet with shitty articles are synonymous. Not that this is a problem or a particularly scary prospect; bookstores are already filled to the brim with garbage, it's just a game of numbers at this point.
Searching your internal docs is an interesting one, but it's still unclear what this can do that grep can't. The leap forward would be ability to reason autonomously, but we're as far from that as we've ever been.
Yep that’s the first great use case I’ve thought of - looking through all the documents in a group and answering a specific question. Also can imagine multi step pipelines driven by answers to previous questions.
Large pharma and life science companies generate huge amounts of documentation around change. They have a huge historical corpus of categorized, structured documents that have been reviewed and approved, so the quality should be good. I can definitely see a "draft document using AI" option in the future.
The commenter you're replying to submitted the article, I think they were trying to get people to hold off on prejudging it as Yet Another AI Submission by stating that it's actually got a slightly different message than most
It seemed rather light on details to me. Like, do you upload a whole bunch of documents to the site, and then it builds a model based on them that it uses to answer queries?
No, the basic model already exists. You could throw together something like this yourself: encode your documents with a Transformers model into a series of vectors. Then you're merely a nearest neighbor search away from finding the most semantically relevant documents. Feed those documents to GPT-3 or some such LLM as contextual state, along with the query, et voila! you have Q&A on documentation.
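For a rough sense of it, here's a minimal sketch of that retrieve-then-read loop, assuming sentence-transformers for the embeddings and the pre-1.0 openai client for the GPT-3 call (the model names, documents, and question are placeholders, not what any particular product actually uses):

    # Minimal retrieve-then-read sketch (assumed libraries: sentence-transformers,
    # numpy, openai pre-1.0; model names and documents are illustrative only).
    import numpy as np
    import openai
    from sentence_transformers import SentenceTransformer

    docs = [
        "Expense reports are due on the last Friday of each month.",
        "Production deploys need sign-off from two reviewers.",
        "Contractor VPN setup is documented in the IT handbook.",
    ]

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = encoder.encode(docs, normalize_embeddings=True)  # unit-length vectors

    def answer(question, k=2):
        # Nearest-neighbor search: with normalized vectors, dot product == cosine similarity.
        q_vec = encoder.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(-(doc_vecs @ q_vec))[:k]

        # Feed the most relevant passages to the LLM as contextual state, along with the query.
        context = "\n".join(docs[i] for i in top)
        prompt = ("Answer the question using only the context below.\n\n"
                  f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
        resp = openai.Completion.create(model="text-davinci-003",
                                        prompt=prompt, max_tokens=128, temperature=0)
        return resp.choices[0].text.strip()

    print(answer("When are expense reports due?"))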
(Spoiler: I worked on this feature @ Slite) But in practice, the effectiveness of your pipeline will depend greatly upon exactly how you implement each piece.
Here are some of the things we've had to consider and develop, in order for the Q&A to be production worthy:
- at which level is it best to encode documents (sentence, paragraph, whole doc) and how can those tiered embeddings be weighed effectively?
- how to use the natural document hierarchy in the workspace?
- where to add NLI, so we can actually compare in-model "does this statement imply that statement" rather than just comparing vectors
- how to prioritize passages using additional fine-grained ranking against the question (cross-encoding, etc)?
- how to compute a "confidence" score so that we actually take the generative (GPT) part out of the equation and bail out early if the documents do not contain anything which strongly implies an answer to your specific query
These are just a few of the pieces. But what we learned quickly is that solving the problem of building a great Q&A means first solving many problems that are deeply intertwined with search algorithms, natural language understanding, and other text problems in general.
Thanks for all this, most of which flew right over my head!!
I wonder, can you recommend a resource where one could get "quickly" up to speed on how this stuff conceptually works, something one could ingest in a weekend or so to get a decent handle on it (but not necessarily be capable of doing anything)?
Like, I think I have a decent understanding of the probabilistic "next word" aspect of LLM text generation, but I'm assuming there's an initial "vector query" that happens prior to this part of it?
Also, could you offer any advice on the notion of how AI/ML is able to extract various meanings from text? For example, let's say I wanted to be able to scrape this thread and then for each comment extract various ideas/topics mentioned in each comment (ideally with influence from a custom ontology), detect emotions, claims of fact, etc etc etc - is that sort of thing possible?
I was thinking of something along these lines a while ago: "What if we had an AI that we didn't merely use to obtain answers, but rather to ask intelligent questions, so that it would help build self-criticism when developing a product, service, piece of code, etc.?"
It would still need humans to develop enough data for an AI to come up with something meaningful.
If you were to ask an AI model how to structure a good e-commerce site when no previous data on e-commerce was available, you'd probably get nothing meaningful.
Or ask how to do human-centric design when the web was all about animated GIFs and flashy buttons. You'd probably never get minimalist design as an answer.
The current chatbots are kind of like a calculator for language—they’re not going to do all your work for you, but they’ll definitely speed and streamline the process.
What kind of basic math are we talking about here? Arithmetic? Calculus? Group theory?
What makes those "basic maths" a good benchmark for whether we're fucked?
Freeing up head space for more important things is valuable. Not having to maintain 7 digit numbers in your head lets you focus on the actual problem.
Not having to worry so much about the phrasing in a document because anyone can ask a bot to rephrase it, in a way that makes the most sense to them, is better than what we have today.
Also, was the slightly-below-average person actually good at basic mental arithmetic before? Or did they just work through simple problems agonisingly slowly or accept the answer the other person gave them even though it was wrong and left them shortchanged?
> What makes those "basic maths" a good benchmark for whether we're fucked?
They weren't saying basic maths was a good benchmark for whether we're fucked, they said that if GPT does for every other skill what calculators did for basic maths — viz. make us forget all the fundamentals and rely on computers for everything — we're fucked.
My personal opinion is that sometimes our head space is maybe a little too freed up by tool-assisted thinking, and we lose the ability to focus and concentrate.
You don't _need_ to be good at anything really, but we live in much better societies when people are good at things. The fact that we can live like mindless automatons doesn't mean it's something we should strive for.
I wonder if this article was also written with AI help. For instance, ChatGPT usually ends the last paragraph with "Ultimately". As a side note, I've also seen sites do SEO by writing possible search terms into the article itself.
I'm increasingly frustrated by the number of people willing to give up control of their own thought processing. At what point do you decide, "I'm good enough and don't need to improve; I'll make the AI do it for me"?
Before getting frustrated, it seems like we need to have a serious conversation about what we believe is/is not appropriate to delegate to a computer system. We need to be looking at why people are so willing to give up certain categories of thought processing.
Do you feel the same frustration regarding handheld calculators? Why or why not? The utility of reducing math mistakes and improving accuracy across an entire population seems clear.
And yet the ubiquity of calculators has not removed the need to learn the fundamentals of mathematics. I suspect the same will emerge with AI tools.
The primary difference is that it seems possible to get better at writing by working with an AI while a calculator is just a black box that spits out a pre-determinable answer.
A calculator is arguably an overly simplistic analogy, and nuance abounds once you bring AI into the mix, but I don’t think the tech community has done a good job of distinguishing between AI-specific concerns and garden variety problems that seem common to most modern technology, especially the kind that automates something that historically required a human in the loop.
It is a risk to outsource critical thinking, because a capacity for critical thinking is essential for freedom of thought, which is essential for free society in general.
Delegating creativity is less likely to lead to outright disaster, but allowing your creativity to atrophy sounds like a recipe for a terminally boring society.
I agree that outsourcing critical thinking is an issue, but I’m not sure that is an automatic outcome of using a tool like this.
The flip side is that some people just struggle with writing. I’ve worked with many of them. Smart/capable people, for whom the process of writing is an impediment to clear thought, with many mental cycles spent on the act of writing itself and remembering the rules of punctuation and spelling to the detriment of the idea they’re trying to convey.
They feel about AI writing tools the way I feel about generative AI image tools: excited that I can get better at visually conveying ideas that I don’t have the artistic ability to draw on my own.
> but allowing your creativity to atrophy sounds like a recipe for a terminally boring society
This is a binary outlook on the impact of these tools and I’d pose these followup questions:
- Is a creative person who is driven to do creative things going to lose that drive, or use AI tools as a force multiplier?
- Is there a reason to believe that the act of writing itself is always a creative endeavor and not just an outward/visual manifestation of thought/and ideas? (Note: I’m not claiming that writing is never creative, or that creativity is never involved in the organization of the words themselves, but this is distinct from the underlying ideas themselves, and not always the most important factor).
- I don’t think anyone would argue that a thesaurus or dictionary, or tools like spell check/autocorrect risk stripping someone of their creativity, but what would it look like to define the line between creative thought and expression of those thoughts?
- Most people don’t “create” today, mostly because it’s hard work. Argument about “boring content that lacks creativity” aside, can we be so sure that lowering the barrier to entry atrophies creativity instead of opening new avenues for it to thrive?
An argument as old and frankly as tired as time itself:
Plato on reading and writing: "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them."
When the machines manage to supplant all the mechanistic portions of our thinking, what will remain? The creative bits, is my guess. There will be far more room for the creative bits, and then... ???
Often the creative bits come out of the struggle to do the work. If you get rid of the struggle, then you also get rid of the motivation for their creation.
By creative bits, you are supplying a human-based objective function. I like the book "Why Greatness Cannot Be Planned: The Myth of the Objective", which argues for the existence of the "objective paradox": as soon as you create an objective, you ruin your ability to reach it. Maybe we shouldn't have any objective as such, and instead let fun and play be the norm of discovery. Sort of like knowing the rules and then breaking them in creative ways.
I had a conversation with ChatGPT yesterday where I came up with the first few episodes for a new season of 24. I provided a set of events that happen in episodes 1, 2, 3 and 4, and it "rewrote" them to sound better. Which it basically did. Then I let it "provide a compelling episode 5". Here it is:
In episode 5 of the new season of 24, Jack Bauer's mission to infiltrate and take down the terrorist group leads him to a critical moment of decision. With the help of his team and his contact in Moscow, Jack manages to uncover the group's plans for a major attack on the city and learns that the group has a mole in the White House. Jack must now decide whether to reveal this information to his team and risk blowing his cover, or keep it to himself and try to take down the group alone.
Meanwhile, the White House and CTU are in a state of high alert as they race to track down the mole and stop the terrorist attack. The Vice President and acting President is under pressure to handle the crisis and maintain the illusion of Jack's kidnapping while trying to find a way to bring him home safely.
As the clock ticks down, Jack must make a difficult decision that will have serious consequences. He has to choose between his loyalty to his country or his loyalty to his team. The episode is filled with intense action and nail-biting suspense as Jack and his team race against the clock to stop the terrorist attack and clear his name before it's too late. The episode also features a dramatic twist that keeps the audience guessing until the very end.
OK, but actually it quickly devolved into a generic description of many episodes of 24. The human creative element can't be captured by GPT yet.
By the way, this totally should happen: Jack is president, has to fake his kidnapping because of a plot coming out of Russia, and meanwhile, apparently, the mole is the first lady.
This is an extremely insightful article. Strong agree with the core points made that these are copilots or co-processors rather than replacements. Maybe at some point in the future we will all be replaced (sound familiar?) but for now these are tools that can help us make progress as a society at a much faster rate
AI will take a while to wash over the masses while Byung Chul Han sees no improvement from an aphorism assistant since he already thinks in aphorisms organically.
I've benefitted since I started using GPT in my own idiosyncratic process last summer since it identified and fixed my problem of writing sideways, i.e. breadth first.
I could get this from a good, old fashioned human education but my professor friends in the humanities have already resigned tenured positions.
I worry that similar to HR becoming human robots the long term impact is that we trade anthropomorphism for robotomorphism.
I do sometimes wonder to what extent and at which point should we start questioning the reality of the discourse happening here in HN.
However, in my work, one of the things I'm tasked with doing is the tedious process of writing and rewriting instructions for training exercises. A non-trivial amount of unnecessary energy is spent on reworking the grammatical structure and flow of this text.
I seriously wouldn't mind focusing on the details and letting an AI do the work.
This is my first time hearing about the Slite knowledge base. I think this is a great idea based on search summaries. However, the access-privilege bit covered in the video seems technically tricky. How do you know that ChatGPT isn't leaking private information in its responses, unless every user has their own private model?
We built the technology in-house and don't rely on ChatGPT, but on GPT alone, and on services that offer it the way Algolia or Elastic offer search, for instance; it keeps data private.
Rather than trying to one-up the last great breather (writer), imo it's enough to breathe (write) because it's an enjoyable pursuit that sparks your imagination and connects you to other humans.
It's not a complete disengagement. Your brain is a processor, just like an NLP model is. It takes in various inputs, rearranges them, and makes connections. Taking in the input of your knowledge base as you write is not about replacing your thinking; it's about jogging your memory, stirring up your thoughts. It's like working with an editor, or pair programming, or even just taking a walk to let things settle.
Unlike an LLM, the brain gets better at things the more it does them, and worse the less it does them. Whatever skills we replace with NLP, we should expect to atrophy in our brains. In the case of writing, we may at first assume that those skills are simply constructing a sentence at a syntactic level, or coming up with the right words. In other words, that writing is essentially a rote, mundane task, like spell checking or long division. But we may find in practice that there are underlying analytical, creative, or reasoning skills which writing requires and which we practice through writing, but not through whatever behavior we replace writing with.
I think AI can make people better thinkers for sure! Look at what AlphaZero did for Chess - it basically showed that humanistic ideas can still thrive at the highest levels of computer chess (think like intentionally hanging a piece, dynamic positional play, trapping opposition pieces, sacrifices that don't pay off for several turns, etc...) rather than brute forcing a position. I'm sure the top players all learned quite a lot about the game through watching AlphaZero, and I'm sure they've all implemented those lessons into their own games.
I don't know.
With Chess, you have an antagonistic relationship.
You are motivated to overcome the AI player.
With this ChatGPT nonsense it is symbiotic, and we rely on the results we get no matter how we got them. Or do you think calculators make you better at computations in your head, or a better mathematician?
I think the problem I have with this headline is the tacit meaning of better.
The results might be better yes, but YOU are not different.
I have spent years polishing my English, reading Orwell, Harpers, Lapham, Poe and others. I spent hours arguing about the minute differences of intricate grammatical structures. This is nothing an AI can help me with. It can only be gotten through a teacher and through arguing, thinking and brooding over differences in the solitude of one's chambers, NOT by conversing with an AI that always delivers bespoke solutions as tasty morsels lacking the bitterness of labor, because the underlying creation of the solution is lost.
Unlike with a teacher, it cannot be gotten from an AI either because the AI is a generator void of the deeper understanding of what is going on.
And the same can be said about thinking.
Writing is nothing but a voiced thought put on paper.
I have worked with Watson and am now working on Ask. I can guarantee that it is much better and more accessible. Slite is uniquely positioned, as it has been an opinionated knowledge base from the beginning.
But don't take my word for it; you should sign up for the beta and give it a try :)