The age of Artificial Intelligence has begun, and so, apparently, has the downfall of civilization. I'm not the one saying that; it's what you'll find most dissenters saying. But dissenting will get you nowhere.

Whether we like it or not, AI is here, and it will change the entire landscape as we know it. The prospect of losing your job to AI is an intimidating one, but this is not the first time something has come along that threatens to swallow people's livelihoods. All you have to do is look at the great advancements throughout history – the printing press, the Industrial Revolution, computers, the internet – and know that it is inevitable. To quote a platitude everyone knows (don't strangle me, please), "Change is the only constant." You have to adapt instead of being bitter about it, or risk becoming irrelevant.

Instead of fearing AI, why not embrace it and use it to help you do your job better? The truth is that while most people's jobs might not be in danger (at least, not right away), there have already been instances of AI replacing people at work. So, without adapting, you run the risk of becoming obsolete. But being dubious about the use of AI is also understandable, especially when it comes to the ethics of it all.

The ethics of using AI in the workplace isn't a straightforward topic. It cuts across a multitude of philosophical dimensions, from questions about the nature and value of human labor to issues of justice, exploitation, fairness, and human rights.

AI and Ethics

On one hand, automating tasks through AI can lead to greater efficiencies, freeing us from repetitive and mundane work. This echoes the Enlightenment ideal of technological progress as a means to liberate us from the "necessities" that constrain human freedom and potential. If handled judiciously, AI could even allow for the redistribution of resources and opportunities, helping us to focus more on creative, intellectual, or interpersonal tasks that we find meaningful.

However, another side of this polyhedron reveals concerns about job displacement, economic inequality, and the dilution of skilled labor.

And then there's yet another question about relying on AI entirely. Now that's a tricky thing. Not only does it cross into the dangerous territory of assuming that AI can do all the work for us, but we also risk "deskilling" if we fall into lazy patterns. There's a danger that over-reliance on AI tools could atrophy human skills, much like how the advent of calculators impacted basic arithmetic skills for some.

One thing is certain: while integrating AI into the workplace, the priority should be human welfare, rights, and values. Especially if you're an employer, you need to deploy AI at work with ethical practices in mind. Your only concern cannot be the bottom line.

Plagiarism and AI

One of the most prominent faces of this complex polyhedron that AI ethics forms is the question of originality and ownership. Can you even use AI ethically without plagiarising?

The most sophisticated AI models today can whip up "original" text, art, poetry, and even music within seconds. But is it really original? The output of an AI may appear new, but the model has synthesized it by essentially recombining elements from pre-existing materials it encountered during training, i.e., material created by humans.

Take, for instance, the case of AI-generated art or music. If the AI's output closely resembles a human artist's work, it raises ethical and legal questions about originality and intellectual property rights.

Most AI creators argue that since the resulting output isn't intentionally designed to copy or steal (the machine has no such intentions), it shouldn't be considered plagiarism. Of course, they would argue that.

But what about normal people? And what about the artists whose art has been "used" (without their consent) to train these models? The artists who are no longer around don't care. But these AI models can also mimic art from artists who are very much around.

So, the intentionality of the machine can't be the only lens through which we examine plagiarism or intellectual property theft; the intentionality of creators should be more important here.

The ethical landscape here is fraught with both promise and peril. There needs to be a balance. And you need to tread carefully.

And while it's a very philosophical debate where we're still finding our footing, it won't help with your short-term goal – using AI at work to increase productivity while also remaining within ethical boundaries. And that's not the goal of this guide either. While philosophical discussions around AI and ethics can go on and on, you came here looking for a "how". So, let's see some action points that can help you use AI ethically in your workflow.

1. Educate yourself about AI

When using an AI, it's really easy to fall into the trap of shortcuts. But before you use it for work, you need to educate yourself about it. That includes everything, from its capabilities to its shortcomings. Without knowing the entire scope of both, there's really no way to use it ethically – especially when it comes to the AI's shortcomings and biases.

Using AI ethically at work should involve leveraging artificial intelligence in ways that respect principles of fairness, transparency, accountability, and inclusivity while avoiding harmful consequences or biases. However, there have been many cases of discrimination by AI that should make one cautious.

AI (as we know it currently) is based on algorithms and data, and machine learning models can perpetuate societal biases present in their training data. So, depending on the algorithms and datasets it is trained on, an AI can discriminate against certain genders, races, ages, etc.

For example, consider the investigation into Optum over an algorithm that allegedly recommended more attention from doctors and nurses for white patients than for sicker Black patients. Or the case against Goldman Sachs for using an AI algorithm that allegedly discriminated against women, granting men larger credit limits on their cards. Of course, there's no saying whether the bias in the algorithm was intentional or not, but the fact remains: AI is capable of such biases. These are real-world use cases where the results of AI can have a significant impact on people's lives – not just a chatbot where the bias might only result in hurt sentiments.

So, even if you're planning to use AI in an environment where you're only concerned with efficiency, accuracy, or some other metric and not originality – such as data analysis or diagnostics in healthcare – you still need to know the biases of the AI. In short, it falls on your shoulders to make sure that the AI you are using can provide fair and equal results. You need to understand where the data in its dataset comes from, how it's been collected, and whether it accurately represents the community you're serving.
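To make that concrete, here's a minimal Python sketch of the kind of sanity check you might run on an AI system's decisions: comparing outcome rates across groups to spot a disparity worth investigating. The data, group labels, and threshold here are purely illustrative assumptions, not a rigorous fairness audit.

```python
# Minimal sketch: compare approval rates across two groups
# (a rough "demographic parity" check). Data is illustrative only.

def approval_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

records = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

rate_a = approval_rate(records, "A")
rate_b = approval_rate(records, "B")
gap = abs(rate_a - rate_b)

# A large gap between groups is a red flag worth investigating,
# though on its own it doesn't prove (or disprove) discrimination.
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
```

A check like this won't settle the question of bias by itself, but it's the sort of quick look at the data that the responsibility described above implies.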

2. Be Transparent and Upfront

If you're using AI to assist you with work, you should be completely honest and transparent about it. The fact about AI tools is that most of them collect data from you as you use them.

Your organization and stakeholders need to know if you're using an AI tool (one the organization hasn't officially deployed) that might be getting access to any confidential client data. And if you're a manager considering deploying AI to your teams, the same set of rules still applies.

In an age where data breaches and misuse can tarnish a company's reputation overnight, ethical AI usage safeguards against potential PR disasters. No one needs another scandal like Facebook and Cambridge Analytica on their hands.

Furthermore, anonymize data where possible to protect individual identities, and obtain informed consent from individuals (customers or clients) before feeding their data to an AI system.
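As a concrete illustration, here's a minimal Python sketch of one common anonymization step: replacing direct identifiers with stable pseudonyms before a record ever reaches an AI tool. The field names and the salt are assumptions made for the example, not a standard schema.

```python
# Minimal sketch: pseudonymize direct identifiers (name, email) before
# sending a record to an external AI tool. Illustrative only; real
# anonymization also has to consider indirect identifiers.
import hashlib

SALT = "replace-with-a-secret-salt"  # keep out of source control in practice

def pseudonymize(value):
    """Return a stable, non-reversible token for a direct identifier."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]

def anonymize_record(record):
    out = dict(record)
    for field in ("name", "email"):  # direct identifiers to strip
        if field in out:
            out[field] = pseudonymize(out[field])
    return out

record = {"name": "Jane Doe", "email": "jane@example.com", "age": 41}
print(anonymize_record(record))  # same input always maps to the same token
```

Because the tokens are stable, you can still join records belonging to the same person after anonymization, without ever exposing who that person is to the AI tool.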

Even in fields where you aren't collecting data, in journalism, for instance, transparency is still crucial. If AI is used to generate content, this should be clearly disclosed to the readers. You can even disclose the AI algorithms that you have used for your work. This not only aids in maintaining trust but also allows for the scrutiny and critique that are essential to the journalistic process.

3. Always Fact Check the Data

AI is notorious for hallucinating and presenting false information as fact. It can also simply get some information wrong. So, before using any results from the AI in your work, it's your responsibility to verify them. This is as much a practical requirement as it is an ethical one.

4. Don't Become Lazy

As I mentioned before, too much reliance on AI can atrophy your skills. Consider writing, one of the latest skills to be artificial intelligence-d. Plenty of sophisticated AI models are popping up that write better than most humans. But does that mean the work should be left up to AI entirely?

In doing so, we might be moving towards a future where most humans won't be able to string words and paragraphs on their own. Writing robots could become as ubiquitous as typewriters once were, where every person in a white-collar job is equipped with an AI assistant. However, this opens up a Pandora's Box of ethical concerns. If AI systems take over the writing process, we risk a kind of "deskilling," where the human role diminishes to merely editing or fine-tuning what the AI generates.

But is writing even writing without original thought? Even the most sophisticated AI models cannot create in the way we do. And if we delegate all the writing to the AI, how will we ever come up with another original thought or insight?

The ethical employee, in this context, needs to strive to maintain a balance where AI serves as a supplement rather than a substitute for human creativity and expertise.

5. Always Retain the Human Touch

The challenges of using AI ethically in fields where originality and authenticity are highly valued are considerable but not insurmountable. The simplest way I'd put it is this: always retain the human touch. Without it, the outputs that AI produces are rather robotic and lack any glimmer of creativity.

Especially in academia, journalism, or any field where the authenticity and originality of thought are paramount, the use of AI could be seen as a form of intellectual shortcut, potentially crossing the line into plagiarism or at least complicating the issue considerably.

In such situations, it's important to use AI only as an assistive tool rather than letting it do all the work. AI is not a replacement for human thought and creativity.

For example, in academic research, you can use AI to help define your hypothesis, sift through vast amounts of literature, identify patterns or anomalies, and even find possible avenues for further investigation. However, the bulk of the work – interpreting the results, writing most of the thesis, and drawing conclusions – should remain a distinctly human task. This division of labor allows you to ethically leverage AI's strengths while preserving the elements of originality and intellectual rigor in your work that only humans can provide.


This was just one example. No matter which field you're using AI in, constant oversight and auditability of the AI are important. You should regularly review the AI's outputs for quality, accuracy, and even ethical soundness.
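One lightweight way to support that kind of regular review is to keep an audit trail of AI outputs. The Python sketch below shows the idea; the field names and the tool name are illustrative assumptions, not a standard.

```python
# Minimal sketch of an audit trail for AI outputs, so a human can
# review them later for quality, accuracy, and ethical soundness.
import datetime
import json

audit_log = []  # in practice, append-only storage (file or database)

def record_ai_output(tool, prompt, output, reviewer=None):
    """Log one AI output along with who (if anyone) has reviewed it."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewer,  # filled in once a human signs off
    }
    audit_log.append(entry)
    return entry

entry = record_ai_output("summarizer-v1", "Summarize the Q3 report",
                         "Revenue grew modestly this quarter.")
print(json.dumps(entry, indent=2))
```

Even a simple log like this makes it possible to answer, after the fact, which outputs came from an AI, what prompted them, and whether a human ever checked them.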

6. Keep Up with the Rules & Regulations

AI is a relatively new space that is still being regulated. So, it's important to stay on top of the latest regulations instead of feigning ignorance when it comes down to it.

AI offers an exhilarating ride toward increased efficiency, but it's not without its potholes. As we blend technology and humanity, ethics can’t be an afterthought. It's a complicated subject and needs to be handled with grace and consideration.