
Companies banning the use of ChatGPT-level tools going forward will find the rules either flouted or subverted, or their employees going elsewhere.

Of course employees have a duty to be professional; the ones who leave will be the ones taking up opportunities at corporations that aren't legacy dinosaurs convinced they can command the waves.

The answer is to sort your processes, security and training out - new AI is here to stay, and managers cannot stop employees using game-changing tools without looking very foolish and incompetent.



Why? It's a valid concern in my opinion. You're feeding OpenAI your intellectual property and just hoping they don't do anything with it. I have the same concerns about Microsoft's TypeScript playground.


It is a valid concern if you send it an entire list of confidential data and ask it to transform that list. However, if you ask ChatGPT some questions about coding in general, it's no different from searching online.


Unless you feed it your proprietary code — something Samsung chip fabrication employees actually did.


This comes down to whether you trust your whole organization to be educated about these issues and to be good at judging what's OK.

At the size of Samsung that's just impossible, and it's easier to blanket-ban a problematic service and have employees request exceptions justifying their use case.

BTW, I've been at companies that blanket-ban posting anything online, and I've had posts security-reviewed before asking for help on vendor community forums. That's totally a thing.


It's like WFH vs. in-office. Some people care and some don't; you're removing the ones who care.

Personally I haven't used GPT much so I wouldn't mind, but banning copilot would make me reconsider.


Yes, 20 lines of transforming JSON from one form to another are exactly what OpenAI employees are looking for in all the data they're gathering. How will my company survive after they get their hands on this?


Good opsec/appsec requires doing things that seem unnecessary, and it depends on the context. Passing a private key or a plain-text customer password to any kind of online tool is never a good idea.


You'd be surprised how much load-bearing software could be reduced to 20 lines of data transforms, or fewer.
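
For a sense of scale, here is a minimal sketch of the kind of transform being described (the field names and schemas are hypothetical):

  # A minimal sketch of a "20 lines of data transforms" program.
  # The input/output schemas here are hypothetical.
  import json
  import sys

  def transform(record):
      # Reshape a record from the (hypothetical) legacy schema.
      return {
          "id": record["customer_id"],
          "name": f'{record["first_name"]} {record["last_name"]}',
          "active": record.get("status") == "ACTIVE",
      }

  if __name__ == "__main__":
      records = json.load(sys.stdin)
      json.dump([transform(r) for r in records], sys.stdout, indent=2)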


It's not clear whether ChatGPT and the like would increase productivity at the organization level. And I am talking about the current GPT-4, not some hypothetical AGI. From what I have seen, a large swath of usage is basically just people DDoSing their teams with a lot of words. Things like someone on a marketing team prompting for a "detailed 10-week plan with actual numbers" that naturally has no basis in reality, but will take a lot of effort from their team to decipher the bullshit. Likewise, there are hundreds of lines of generated code, with tests, that are subtly wrong.

Basically the challenge is fairly straightforward: if one side is machine-generated and the other side is human-validated, the human loses 100% of the time. Either the machine has to be 100% accurate, or very close to it, or the human needs tools to help them. As it stands, neither of those conditions is met yet.


Today I was asked by a coworker, "Why don't you use ChatGPT?" He had just asked me a question that ChatGPT had failed to answer to his satisfaction. I was able to find a complete and strongly justified answer within 30 seconds. But I suspect he'll continue using the tool. I'll continue thinking for myself.


All these arguments about how it can be wrong are literally identical to the arguments I heard from ignorant people between 2001 and 2008 about why search engines were unsafe, because Google could return wrong answers or steal your data.

IMO the only people crapping on AI either don’t know how to prompt it correctly, aren’t very creative, or frankly weren’t doing that much work in the first place and feel threatened by it. I understand the need to protect intellectual property but 10 years from now there will be two types of companies: Those that embrace AI, and those that crashed and burned.


There have always been two types of companies: those that are run well and have processes to evaluate tools as they fit the business's values and needs, and those that are full of grifters chasing whatever hype is trending at any given moment. Unfortunately, the latter type is far from crashing and burning (or at least not quickly enough).

You are neither arguing in good faith nor even engaging with my argument at all. Please try not to do that. Your two paragraphs amount to nothing more than "if you are criticizing GPT-like tools, you are an idiot", and that's not even an argument.


"Unsafe" sounds like a parent explaining things to you.

The big area of concern was people using "Doctor Google" instead of going to a doctor: noticing a sore throat and jumping down the most negative path.

You could use Google to mine certain types of data in that era - passwords, for example. Now much of that is filtered.

In 10 years those intellectual property rights might give them ownership over AI. The court battles haven't begun yet.


Do you have any tips on how to prompt it right? How do I tell it to write code when it isn’t aware of custom internal classes and APIs?


Edit: my comment might seem out of place now, but the comment I was replying to originally had this paragraph:

> I write a lot of code with it and it’s extraordinarily, obviously clear that when prompted right, it increases productivity 2-5x. My company recently banned it and it has been excruciating. Like going back to coding before StackOverflow and Google existed.

I would still appreciate any tips in this regard.


Give it the context it needs. If you have internal APIs, give it a .proto file or whatever you use to define an API. Phrase the question in a way that forces it to explain its thought process; use chain-of-thought reasoning. Find documentation online, give it the documentation, then give it your code, then ask it to do something. Frequently spawn new sessions: if ChatGPT is repeatedly giving bad answers, start a new session, modify the context, and ask in a different way. Basically you have to brute-force it to get the answers you want on harder problems.
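
To make the "give it context" step concrete, here is a minimal sketch in Python. It assumes the pre-1.0 openai package with an OPENAI_API_KEY environment variable; the file names and the FooService endpoint are made up for illustration:

  # A minimal sketch of packing internal context into a prompt.
  # Assumes the pre-1.0 openai Python package; the file names and
  # the FooService endpoint are hypothetical.
  import openai

  api_definition = open("internal_api.proto").read()  # your API definition
  docs = open("vendor_docs.md").read()                # relevant documentation
  code = open("my_module.py").read()                  # the code to change

  response = openai.ChatCompletion.create(
      model="gpt-4",
      messages=[
          {"role": "system", "content": (
              "You are a senior engineer. Explain your reasoning "
              "step by step before writing any code.")},
          {"role": "user", "content": (
              f"API definition:\n{api_definition}\n\n"
              f"Documentation:\n{docs}\n\n"
              f"Code:\n{code}\n\n"
              "Add a client for the FooService endpoint to this module.")},
      ],
  )
  print(response.choices[0].message.content)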


Got it, will try this out. Thank you!


This is well said for the case where correctness is required. But low-trust communications in low-profit activities like tech support, cheap ads, government help lines, and even some professional services won't care; employers gain and employees are redundant - next! In those businesses it is an old story that the cheapest, cruelest, most scofflaw company wins. That this is true isn't even debatable, so here comes MSFT to sell it to you.


> Companies banning the use of ChatGPT-level tools going forward will find the rules either flouted or subverted, or their employees going elsewhere.

Why? Companies typically have many rules that they expect employees to follow. Why would employees disregard these particular rules, or even quit because of them?


Shadow IT.

Companies can have reasonable cause to block things, or require processes for installing software, etc., but when those burdens become too much time or effort employees will find a way around it.

Almost a decade ago, the company I worked for didn't have good wiki software OR a good request system. My team had a Linux server for some primitive monitoring and automation of our systems, and Apache was already installed...

Within a few weeks we had operationalized a new DokuWiki installation, and not long after that we built an internal request system based on Bottle.py (since it didn't require any installation, only a single file).
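
For flavor, something like the following (a hypothetical reconstruction, not our actual system) is already a working single-file request tracker in Bottle:

  # A hypothetical reconstruction of a minimal single-file request
  # system; bottle.py is the only dependency.
  from bottle import Bottle, request

  app = Bottle()
  tickets = []  # in-memory store; the real system would persist data

  @app.post("/requests")
  def create_request():
      tickets.append(request.json)     # Bottle parses the JSON body
      return {"id": len(tickets) - 1}  # dicts are serialized as JSON

  @app.get("/requests")
  def list_requests():
      return {"requests": tickets}

  if __name__ == "__main__":
      app.run(host="0.0.0.0", port=8080)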

Given that GPT-4 is so incredibly useful, according to the people I've heard talk about it, there will be employees trying to use it to increase their code quality, communications, planning, etc.

My current employer put out guidance specifically stating not to put any proprietary code into it, no matter how small, nor any confidential information of any kind (don't format your internal earnings email with it, for example).

That seems reasonable, and it recognizes how hard it would be to enforce zero tolerance on employees, especially without total network control over work endpoints.


Just imagine if companies didn't allow people to use internet search engines for their daily work.


Sure, but those are actually useful.


Because it quickly became a tool so useful that I feel I do my job better with it than without it, now that it's available. Similar to how I would disregard the rules and/or quit if I were not allowed to use my operating system of choice at work, or was denied a specific tool integral to doing my job (well).


Ah, so you are not disregarding this particular rule, but you disregard all rules that you feel impede you. That answers my question, thank you -- this has nothing in particular to do with GPT tools.


No. I follow many rules, whether I like them or not. I picked one simple example of a rule I would consider a dealbreaker for my employment, to illustrate how many people have already built ChatGPT so heavily into their workflows that they would consider it detrimental if their employer took it away. But yes, it might not really have anything to do with GPT specifically, besides it apparently being a very useful tool for a lot of employees, given that some would at least claim to quit their jobs over being prevented from using it.


Thank you again. I did not mean to claim that you disregard literally all rules, and I apologize for coming across that way.

I find your explanation sound and reasonable. There are many rules that, even if disliked, are not sufficient grounds to do anything about. But sometimes a rule may impede your workflow so much that you find it preferable to either quietly work around it, or even to quit.


I appreciate your apology, as it did sort of come across that way.

I'll give you a real-life anecdotal example to expand a little on my point. My buddy is a front-end developer at a company which, according to him, produces pretty basic "stuff" (sorry, I don't know anything about front-end). He says that he's gotten lazy and unmotivated to do anything about it; it leaves him unchallenged, and he doesn't really like his job. Since GPT arrived, he's been able to (according to himself) cut out 70% of the boring boilerplate-type work he has been doing for years, by having GPT write it for him and just verifying that it works. This has ultimately allowed him not only to take on more interesting projects where he can challenge himself, but also to spend a lot of the time he previously spent writing "bullshit boilerplate code" on learning new and more challenging front-end things.

I can easily imagine people in other jobs, in IT or perhaps in other fields, already using GPT to reduce the boring parts of their work. I genuinely cannot recall hearing anyone say that a new IDE, or any other tool since the arrival of the computer itself, reduced their "boring work" load this significantly. So I think at this point it is reasonable to assume that access to GPT will become as commonplace as having access to a computer or email (given you work in a field where those are considered basic/primary tools, of course), and that employers will have to adapt. If not, people will disregard the rules, go "shadow IT", or even consider quitting.


Perhaps the fact that I work exclusively on the front-end is why I also derive tremendous value from GPT-4, and I have been perplexed by others saying they find no value in GPT-4 for coding. There is so much boilerplate BS that GPT-4 just nails down and lets me move on to bigger things.

Just yesterday, I needed to mock up a quick prototype for a new feature we're developing. I just pasted in my existing React component (it's all front-end code with no sensitive/proprietary information), told GPT-4 what I wanted it to do, and it did it.

Is it perfect? No. Does it sometimes get things wrong? Yes. But it's still easier and faster to help guide GPT-4 and tweak its final output than to have done it all myself.

I'll never go back. Never.


boilerplate is a sign of shitty abstractions and libraries

and generating boilerplate is the road to madness


What can I say? It's a dirty job. :)


If your job has a lot of boilerplate this is helpful. If your job is finding/solving complex system wide bugs this doesn't help as much.


Don’t you have your own templates?


A fascinating example story! I think that makes it much clearer why one would quit over not being able to use ChatGPT.


Arrogant software devs, thinking this labour shortage of code monkeys will continue in perpetuity.


What labor shortage? It's an employer's market right now, with a glut of high-quality candidates looking for jobs.


Simple reason: I can refactor the codebase in a dozen different ways in a matter of seconds and choose the best one to work on. I can summon a large volume of unit tests, descriptive logging statements, etc. I can also just dump the logs into it, and 9 times out of 10 it will tell you right away what the issue is and how you can resolve it.

I.e., basically you can do a lot of work in just a matter of hours. Once you taste the productivity increase from integrating AI into your workflow, you will miss it if it is taken away.

Not to mention you can build all the handy little tools, in a matter of seconds, that will make your daily life way easier.


Employees have been prevented (by rule) from doing things that would make them more productive for a long time. For a trivial example, programmers who are proficient in Emacs not being allowed to install and use Emacs.

I still fail to see why employees will now choose to disregard this particular rule, and either disobey or quit.


> For a trivial example, programmers who are proficient in Emacs not being allowed to install and use Emacs.

I'm not sure how you mean trivial here, but not being able to use Emacs isn't a trivial matter to Emacs users :)

> I still fail to see why employees will now choose to disregard this particular rule, and either disobey or quit.

I fail to see why employees follow rules that make no sense rather than disobeying or quitting.


Honestly, "no Emacs" is one rule that would make me quit.


So would I. Funnily enough I thought it might happen at a job in the past so I've thought through the "would you quit if work banned Emacs" question quite a bit.


They actually can. But not every business is competent. It is completely irresponsible to query public services with internal company intellectual property, or with any information whose disclosure may breach the contract you signed with a responsible and competent business. It is trivial to track what you do on business equipment, and if you think you're being smart by subverting that and querying public services through other means, that can and will be solved soon. Someone is going to make a lot of money deploying an AI system that can retroactively track these things, and when that happens, for better or worse, hope you were not irresponsible while working for a business with the technical acumen to trace you. It's not a matter of whether it can be done with today's tech; it's a matter of whether the business you worked for has the means and willpower to do it.

My advice is follow best practice and wait for an official company policy detailing the use of these new services. Otherwise you may find yourself in legal trouble years from now when the traces you left can easily be uncovered by technology that does not forget.


Samsung is manufacturing real high-tech things, lots of them. GPT just launders things that other people have created. It isn't high-tech.

What would impact the world economy more? OpenAI disappearing (no one would notice) or Samsung disappearing?


> employees going elsewhere

That's strange to me. I'm employed in order to receive a paycheck. If receiving my paycheck is contingent on me not using ChatGPT, then so be it, what do I care?


Employees want to work with good tools and do interesting work. I'm at a small startup and get to spend a lot of my time working with AI - figuring out how we can use it internally, working out where we can integrate it into our product and using it myself to prototype things and automate some of our business processes.

I am hugely fascinated and impressed by AI, and the fact that my work is paying me to spend time using this awesome tool in a real world context is suuuuuuuper good for my job satisfaction.


Because many people will get away with it. Their paycheck is not contingent on not using ChatGPT, because their employer won't find out.

Some people want their work to be high quality and/or done quicker. If there are tools to facilitate that, some people will be interested.


Depends. If suddenly my company said I was going to work on some dead-end legacy crap for the next 5 years, I'd nope out ASAP.

If you get fired or quit, and any other job you're looking at is going to have you interacting with new languages or AI workflows or something like that, you have to assess what value you're losing by working for that company, and the risks associated with it.


> I'm employed in order to receive a paycheck. If receiving my paycheck is contingent on me not using ChatGPT, then so be it, what do I care?

I can be employed to receive a paycheck by employers that give me freedom or employers that take away useful tools.

Why would I stick with the employer that gives me less freedom? Not to mention, getting a new job almost always drastically increases the size of said paycheck.


Pretty much the thought-devoid "It's new, so it must be good!" argument people have been pushing for centuries, whether it's music or technology or politics or fashion.

> Companies banning the use of ChatGPT-level tools going forward will find the rules either flouted or subverted, or their employees going elsewhere.

If my employees are leaking company information through ChatGPT, I'm happy to have them go work for my competitors and leak their information, instead.


> Pretty much the thought-devoid "It's new, so it must be good!" argument

If you think people are just hyping ChatGPT because it's new, without further reflection, you have stunningly missed the moment and have a rude awakening coming.


I was with you until you said "employees going elsewhere". There will be a group who need to use it because they can't function without some AI-assisted help, but people working at the company already know how to do their jobs. Why would they leave?


Do you think regular employees care all that much about ChatGPT? It's a cute toy. If my employer says not to use it for work data, that's just not a big deal for me.


Lol, going where exactly? Every big company has blocked or restricted ChatGPT from the day it became available.


Nobody cares about ChatGPT where I work. Nobody is going to quit.

The only use for AI here is writing code, and the company created a policy around that.


Or the company will pay for a similar tool that doesn't share the data externally.


There is no similarly capable tool available presently.

I agree that when it becomes an option, hosted or securely tunable solutions will be preferred in some cost/risk calculations.



