
Why study if you can use Artificial Intelligence (AI)?

Article by Prof. Dario Silic, PhD, and Prof. Mario Silic, PhD, of SSBM Geneva

Do you still need education when Artificial Intelligence (AI) offers alternative paths to knowledge? Shifting Paradigms: Navigating Education’s (R)Evolution in the AI Era.

Unlocking the Future: 10 Compelling Reasons to Embrace AI as an Alternative to Traditional Schooling

In the ever-evolving landscape of education, the integration of artificial intelligence (AI) has become increasingly prevalent. Nonetheless, a pertinent question arises: does it genuinely substitute for the inherent value derived from human-centric study? Should you go to school, university or college when AI resources on the web are readily available at no cost? Why study and waste your time if AI can solve most of your problems and give you the knowledge you need in any field of interest, whenever you want?

Navigating Education in the Age of AI: Do We Still Need School and College? AI vs. Academia: Debunking the Need for Traditional Education in the AI Era? Degrees or Algorithms: Reassessing the Value of Education in an AI-Driven World? The Great Debate: Education in the AI Epoch – Do We Need It Anymore?

These are some of the many questions raised in this article, which explores the symbiotic relationship between human intellect and AI and highlights why studying still matters in an era of advanced technological assistance and seemingly “free” AI.

Some people say that AI can enhance learning experiences by providing personalized recommendations, adapting to individual learning styles and offering real-time feedback. However, many AI users say that it lacks the depth of human understanding, emotional intelligence, and the capacity for creativity that comes from genuine human learning. Engaging in conversations with students enrolled in SSBM’s bachelor, master, and doctorate programs reveals a consensus: the educational experience at SSBM Geneva empowers them to comprehend intricate concepts, foster critical thinking, and participate in analytical reasoning—proficiencies that transcend the capacities of artificial intelligence.

Moreover, SSBM Geneva students confirm that the act of studying fosters discipline, perseverance and a sense of accomplishment. It cultivates a deep understanding of subjects, allowing students to apply knowledge creatively and adapt to novel challenges. As experienced professors in the fields of Financial Management and IT, we can confirm without any doubt that while AI may excel at processing vast amounts of data, human cognition is indispensable for making meaningful connections, generating innovative ideas, and solving complex problems.

So why study or go to school if you can use Artificial Intelligence (AI) for free?

Let’s take a closer look at the 10 main reasons to use AI instead of going to school. By the end, you will be in a position to conclude whether studying is still justified when AI is available nearly for “free” and promises the same or even better knowledge than what you would receive in traditional schools.


10 main reasons to use AI instead of going to school

1. Critical Thinking and Creativity

AI helps you obtain information, articles, analyses, presentations or other content automatically in a few seconds. You do not need to wait for professors to explain different topics in class. At first glance this seems more appealing than sitting through hours of lectures waiting for the professor to finish (online education can be a solution; see our online programs at www.ssbm.ch), which can be even worse if the professor is inexperienced, uninteresting or presents the material with subpar presentation skills. On the other side, depending solely on AI for learning neglects the development of essential human skills, such as critical thinking and creativity.

Let us demonstrate this using a concrete real life example:

Scenario: Designing an Eco-Friendly Transportation System

Critical Thinking

Imagine a city facing significant traffic congestion and high levels of pollution due to the conventional transportation methods. Someone with strong critical thinking skills would analyze the existing problems, considering factors such as traffic patterns, environmental impact, and public transportation usage.

They might evaluate data on peak traffic hours, identify bottlenecks, and assess the environmental consequences of current transportation systems. Critical thinking would involve understanding the root causes of the issues, such as outdated infrastructure or inefficient public transit routes.

Creativity

Now, the individual must come up with a creative solution. Instead of proposing the construction of more roads or expanding traditional public transportation, they might think outside the box. For example, they could suggest a system of elevated bike lanes or pedestrian-friendly zones in the city center.

To encourage public engagement, they might propose a rewards system for using eco-friendly transportation or integrate sustainable energy sources into the transportation infrastructure. This creative approach involves thinking beyond conventional solutions and considering innovative ways to address the city’s transportation challenges.

Many other examples could be used to demonstrate this, including financial statement analysis based on any balance sheet or P&L statement used in the finance courses that we teach, such as Financial Statement Analysis or Financial Management. In these courses, students are expected to understand cash flows, project them into the future, apply techniques for measuring investment profitability, measure risk, and perform scenario or sensitivity analysis, with the goal of maximizing company value and making the company profitable on a long-term basis. AI cannot do this: it will simply provide definitions and basic calculations, without the ability to detect operational, financing or investment risks and problems, propose cost optimizations or revenue maximizations, suggest an optimal capital structure or long-term financing strategies, measure risks or, quite simply, maximize shareholder value. A good professor, especially one with corporate finance experience, will be able to do this in a very simple way by explaining the material to students. Professors do this through logic, critical thinking and rational analysis, interpreting the risks identified through horizontal or vertical analysis and the problems detected, and then proposing adequate solutions to such structural, operational, financing or investment problems in any company.
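To make the idea of scenario and sensitivity analysis concrete, here is a minimal sketch in Python (our own illustration with invented figures, not material from any specific course) that projects cash flows and recomputes a project’s net present value under a few revenue-growth assumptions:

```python
# Minimal NPV scenario/sensitivity sketch (hypothetical figures only).
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

initial_outlay = -1_000_000   # assumed investment in USD
base_cash_flow = 300_000      # assumed year-1 operating cash flow
discount_rate = 0.10          # assumed cost of capital

# Scenario analysis: recompute NPV under different annual growth assumptions.
for growth in (-0.05, 0.00, 0.05, 0.10):
    flows = [initial_outlay] + [base_cash_flow * (1 + growth) ** t for t in range(5)]
    print(f"growth {growth:+.0%}: NPV = {npv(discount_rate, flows):,.0f} USD")
```

A professor’s added value lies in interpreting such numbers – judging which growth assumptions are realistic and which risks they hide – rather than in the arithmetic itself.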

To avoid being purely theoretical, we can give you an illustrative example based on Robert Kiyosaki’s videos on TikTok: https://vm.tiktok.com/ZGeFyPbRf/

Listening to Mr. Kiyosaki, you get the feeling that you can become rich overnight by using bank loans, which supposedly always increase the value of your company, and that debt represents a tax- and risk-free source of financing. In reality, this is absolutely false for many reasons. Robert Kiyosaki, like many other managers inspired by Keynes, presents only one good side of the story without giving you the facts about the risks of debt. This sends the wrong message: people can easily conclude that using debt carries no risk, that debt always creates additional value and that, in the end, you can easily become rich. Someone with an appropriate financial education acquired in school should be aware of the many challenges and risks of using debt. Firstly, not everyone can raise debt. Banks provide loans to borrowers based on their risks, collateral and guarantees covering the risk of default on loan repayment. As a consequence, the cost of financing and the duration of loans will reflect these credit risks for lenders. In simple terms, the stronger your balance sheet and profit and loss statement, and the better your track record, the better the lending conditions you will receive. Robert Kiyosaki, like Donald Trump and many other businessmen, believes that you can become rich by investing in real estate, which they assume will keep growing in value. By using debt they avoid paying taxes, and they pay all their costs, even private ones, through their companies. From a tax perspective, any long-term financing increases your cash position and creates equity or a liability on your balance sheet, so there is no direct impact on profit tax. However, the debt or equity raised finances assets, which in turn generate revenues that are then subject to profit tax.

However, they do not talk about what happens to such investments when real estate prices correct, as they did during the 2008 financial crisis, when a huge real estate collapse in the USA affected everyone: prices collapsed and many investors lost billions without being able to repay their loans to banks.

As a result, many banks and financial institutions, such as Lehman Brothers, American International Group, Merrill Lynch, Bear Stearns and others, ended up in liquidation or pre-liquidation processes. They also do not mention that banks effectively own your assets until you repay the debt; that you also need equity to obtain bank loans (gearing); that you have to pay taxes when private expenses are paid through your companies (payments in kind); that you carry operating risks (measured by standard deviation or variance) and costs when you rent out assets; that you pay taxes on such revenues; and many other risks and costs that must be considered when investing in any asset. These are topics one learns in school, not on the web, on TikTok or on AI platforms such as ChatGPT. Contrary to Robert Kiyosaki, Warren Buffett will tell you to go to school and learn risk measurement, company valuation and the many other corporate finance topics you need to know before investing in any asset.

https://www.youtube.com/watch?v=63oF8BOMMB8

In school, we teach students that using debt is good when you want to reduce your cost of financing and reach an optimal capital structure, but only with the target that your IRR > WACC; if not, do not use debt or any other source of financing.
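To illustrate this decision rule, here is a minimal Python sketch (our own illustration; the company figures are invented) that computes a weighted average cost of capital, including the tax shield on debt, and compares it with a project’s internal rate of return:

```python
# Minimal IRR-vs-WACC decision sketch (hypothetical figures only).
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital with the tax shield on debt."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

company_wacc = wacc(equity=600_000, debt=400_000,
                    cost_equity=0.12, cost_debt=0.06, tax_rate=0.20)
project_irr = 0.11   # assumed internal rate of return of the project

if project_irr > company_wacc:
    print(f"IRR {project_irr:.1%} > WACC {company_wacc:.1%}: the project creates value.")
else:
    print(f"IRR {project_irr:.1%} <= WACC {company_wacc:.1%}: do not finance it on these terms.")
```

The numbers are deliberately simple; in class, the same rule is applied to real balance sheets, where estimating the cost of equity and the after-tax cost of debt is the hard part.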

By combining critical thinking to understand current issues and creativity to propose unconventional solutions, a student studying in school demonstrates a holistic approach to problem-solving that goes beyond what artificial intelligence or social platforms currently offer.

In conclusion, overreliance on the automation that AI offers undermines creativity and critical thinking, two important skills learned at school.

2. Emotional Intelligence

AI lacks the ability to understand and respond to human emotions, missing a crucial aspect of the learning experience.

An example of a lack of emotional intelligence when using AI can be found in certain chatbots or virtual assistants that fail to adequately understand or respond to human emotions. While AI systems have made significant progress in natural language processing, they may struggle to interpret the emotional nuances of human communication.


For instance, imagine a scenario where a person is interacting with a customer support chatbot after experiencing a problem with a product or service. The individual expresses frustration and dissatisfaction in their messages. A chatbot lacking emotional intelligence might respond with generic, pre-programmed answers that do not address the customer’s emotional state. It may provide solutions in a tone-deaf manner, ignoring or misinterpreting the person’s frustration.

In contrast, a more emotionally intelligent AI would be designed to recognize and respond to the user’s emotions appropriately. It might acknowledge the customer’s frustration, express empathy and offer a solution in a more understanding and compassionate way. Integrating emotional intelligence into AI systems is crucial for creating more effective and human-like interactions, especially in customer service or support scenarios where emotions often play a significant role.

Another illustrative example comes from the chatbot introduced on www.ssbm.ch, where a potential student asked whether he could come to study the MBA program in Geneva with his wife. The AI answered that it was not instructed to provide an answer in such situations. Any human would answer such a question without any problem: obviously you can study with anyone you want, and you might even get a discount if both of you enrolled. Emotions and feelings are part of normal human behavior and actions, whilst AI still struggles to respond to such human expectations.

3. Adaptability

While AI can adapt to certain learning styles, it may struggle to understand the nuanced and unique requirements of each individual.


An example of limited adaptability in AI can be observed in systems that struggle to adjust to new or unforeseen situations, particularly when faced with data or contexts outside their initial training scope. AI models are trained on specific datasets, and their ability to adapt to novel scenarios can be constrained by the limitations of that training data.

Imagine a voice recognition system designed to transcribe spoken words accurately. This system is trained on a dataset that primarily consists of clear, well-enunciated speech in a particular language. However, when faced with a user who has a strong accent, speaks rapidly, or uses colloquial language not well-represented in the training data, the system may exhibit limited adaptability.

In this scenario, the AI’s performance may degrade, resulting in inaccurate transcriptions or misunderstandings. The lack of adaptability becomes evident when the system encounters variations that were not adequately covered during its training phase.

Developing AI with broader adaptability requires exposing the system to a more diverse range of scenarios and data during training. 

Additionally, incorporating techniques like transfer learning, continual learning, or reinforcement learning can help enhance adaptability by allowing the AI to learn and adjust to new information and circumstances over time.
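As a rough illustration of the transfer-learning idea, the following sketch (our own example, assuming PyTorch and torchvision are installed; the number of target classes and the dummy batch are hypothetical) reuses a model pretrained on one task and retrains only its final layer for a new one:

```python
# Minimal transfer-learning sketch: reuse a pretrained backbone, retrain only the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone

# Freeze the pretrained parameters so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new task (hypothetical: 5 classes).
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 8 "images".
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The point is not the specific library but the principle: knowledge learned on one dataset is adapted to a new context with relatively little extra data, which is one way engineers try to work around the adaptability limits described above.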

To give you an illustrative example, consider how AI adapts to different individuals or groups with different cultural, political, intellectual or other backgrounds. When we hold live lectures on our campuses in Geneva, Paris, Vietnam, India, Cameroon or Myanmar, there is always a group of students more interested in explanations of concrete, real-life corporate examples provided by a live professor. Others prefer to hear about theories, and some want to focus mainly on case studies. We, as professors, then have to find a balance, satisfy all students, find a way to motivate all of them during the limited learning period ahead of us, and deliver all the predefined learning goals and outcomes using the best teaching methods. A good professor should be simple, fun, corporate-oriented, concrete and professional. AI simply cannot combine all of these qualities and spend time with different individual and group profiles to address their expectations. In simple words, adaptability is still a huge concern when using AI.

4. Inability to Foster Social Skills


Studying often involves collaborative efforts, promoting social skills that AI cannot (yet) replicate.

An example of the inability to foster social skills using AI can be seen in virtual assistants or chatbots that struggle to engage in natural, contextually relevant conversations. While these AI systems may excel in specific tasks or answer direct queries, they often lack the nuanced understanding and social awareness required for more intricate social interactions.

Consider a virtual assistant designed for social conversation. Despite having a vast knowledge base and language capabilities, it may fail to grasp the subtleties of human communication, such as humor, sarcasm, or changes in tone. Additionally, these systems may struggle to pick up on non-verbal cues, gestures, or emotions expressed through facial expressions or voice intonations.

In a social setting, an AI with limited social skills might respond inappropriately or seem detached when faced with expressions of empathy, joy or sorrow. It may not understand the importance of small talk, cultural nuances or the appropriate timing for certain comments, leading to interactions that feel robotic or awkward.

Enhancing AI’s ability to foster social skills involves a more profound understanding of human behavior, emotions, and social dynamics.

In school, professors are expected to have emotional intelligence, social awareness and social skills. They should have a deep comprehension of social context, enabling seamless and empathetic interactions with students. AI, by contrast, cannot foster the social skills that every parent wants their child to acquire at school, and social skills are a precondition for any professional or intellectual development.

5. Risk of Dependence

Relying on AI for all learning needs may create a dependency, hindering personal growth and self-sufficiency.

An example of the risk of dependence on AI can be seen in the increasing reliance on automated decision-making systems, particularly in critical areas such as finance, healthcare, or autonomous vehicles. As AI systems become more sophisticated, there is a potential danger that humans may excessively depend on them without maintaining a sufficient level of oversight or understanding of the underlying processes.

For instance, consider a financial institution that heavily relies on AI algorithms to make investment decisions. If the AI system encounters an unprecedented situation or experiences a technical glitch that the developers did not anticipate, there is a risk that it could make incorrect or suboptimal decisions. If human oversight is lacking, and individuals overly trust the AI system, it might lead to significant financial losses.

Similarly, in the healthcare sector, the use of AI for diagnostic purposes is becoming more prevalent. If healthcare professionals excessively rely on AI diagnoses without critically evaluating the results or conducting additional checks, there is a risk of misdiagnoses or missed medical conditions.

The risk of dependence on AI highlights the importance of maintaining a balance between leveraging the benefits of automation and retaining human oversight and expertise. It is crucial for individuals and organizations to understand the limitations of AI systems, regularly update and monitor their algorithms and establish protocols for human intervention when uncertainties or novel situations arise. Overdependence on AI without appropriate safeguards can result in unintended consequences and increased vulnerability to systemic failures.

Let us give an illustrative example to demonstrate all this. Imagine that your boss asks you to invest the company’s available cash of 1 million USD and to propose the best investment for the next 3 years. You will probably not say that you will consult ChatGPT. If you ask ChatGPT where to invest 1 million USD, you will get the theoretical answer that you can invest in stocks, bonds, real estate, ETFs, mutual funds, currencies, etc. At the end of the answer, ChatGPT will tell you to always conduct thorough research or seek professional advice before making significant investment decisions. In other words, AI will give you the basic theory of everything, and then you have to work out what the best investment policy is for you. However, your boss already knows that. What you are expected to say, based on the knowledge acquired in school, is that each of these financial instruments has its own risks and potential yields, as well as legal, tax, accounting and financial impacts. You are supposed to know what operational, financial and investment risks are acceptable for the company, and what tax implications, accounting impacts, cash flow needs and other specific or systemic risks must be considered before any investment decision involving different financial instruments. This is something that AI cannot help you with. Your work can depend on the use of AI, and that is obviously not a problem in itself; but if you do not understand how the AI obtained its results, it can create many risks and problems. In a school such as SSBM Geneva you will learn to use AI as a supportive tool, but always with the goal of understanding the intrinsic values, the processes and the results obtained.

6. Creativity and Innovation

AI may provide information necessary for creation and innovation, but the human mind is vital for synthesizing knowledge, fostering creativity, and driving innovation.

One example of a challenge in creativity and innovation when using AI is the issue of biased or unoriginal outputs from generative models. While AI systems, particularly those based on machine learning, have shown remarkable capabilities in generating content, there are instances where these systems exhibit biases present in their training data or produce outputs that lack true innovation.

AI models, including generative models, are trained on large datasets that may inadvertently contain biases present in society. If the training data reflects existing social, cultural, or gender biases, the AI system can learn and perpetuate those biases in its generated content.

Consider a language model trained on text data from the internet, where biases and stereotypes are prevalent. If prompted to generate text, the model might produce outputs that inadvertently reinforce or replicate those biases, potentially leading to inappropriate or discriminatory content.

In conclusion, AI lacks true innovation. While AI systems can generate content based on patterns learned from data, they may struggle with true innovation or the creation of entirely novel ideas that go beyond the scope of their training data.

True innovation happens in schools, universities, laboratories, research centres and research publication platforms such as SSBM’s GBIS, where SSBM students, especially in the MBA and DBA programs, publish their research papers on a daily basis (https://www.gbis.ch/index.php/gbis), among many others.

7. Incomplete Understanding

AI may process data efficiently, but it might lack the holistic understanding and context that humans bring to the learning process.

An example of the problem of incomplete understanding when using AI can be found in natural language processing applications that struggle to comprehend the context, nuance, or intent behind human language.

AI systems, particularly chatbots or virtual assistants, may face difficulties in fully understanding the context of a conversation. They might misinterpret the meaning of user queries or fail to grasp the implications of specific words based on the broader context.

Imagine a user engaging with a customer support chatbot and asking a series of questions about a product. If the chatbot doesn’t consider the user’s previous queries or fails to understand the evolving context of the conversation, it might provide answers that seem irrelevant or unhelpful.

AI models may struggle to comprehend the nuanced aspects of human language, including sarcasm, humor, or implied meaning. This limitation can lead to misinterpretations and inappropriate responses.

A user might make a sarcastic comment or use humor in their interaction with a virtual assistant. If the AI system lacks the ability to recognize these nuances, it might respond literally or miss the intended tone, potentially causing confusion or frustration.

AI models might have difficulty grasping the true semantics of a sentence, especially when dealing with ambiguous language or multiple possible interpretations.

In a search query, a user might input a sentence with ambiguous terms, and the AI may struggle to determine the user’s actual intent. For instance, the query “book a flight to Paris” might be misinterpreted if the AI fails to consider additional context, such as the user’s location or preferred airline.

In a school like SSBM Geneva, you are expected to gain a complete understanding of the required skills (such as presentation skills, critical thinking, teamwork, management skills, marketing skills, finance skills, risk management and many others) and the knowledge necessary to work in a globalized economy subject to constant technological change.

8. Unpredictability of Technology

AI is very much based on IT technology to create efficient algorithms and machine learning models. Reliance on AI introduces the risk of technical issues or malfunctions, disrupting the learning process unexpectedly.

The unpredictability of technology in the context of AI can manifest in various ways, and one example is the lack of transparency and understanding regarding how certain advanced machine learning models make decisions. This issue is particularly evident in complex models like deep neural networks.

Many state-of-the-art AI models, especially deep neural networks, operate as complex, interconnected systems with numerous parameters. The decision-making process within these models can be difficult to interpret or explain, leading to unpredictability in understanding why a specific decision or prediction was made.

Consider a deep learning model used for credit scoring. If the model approves or denies a loan application, it might be challenging for the end-users or even the developers to precisely determine the factors or features that influenced the model’s decision. This lack of transparency can be a problem, especially in applications where accountability, fairness, and interpretability are crucial.
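One common way practitioners probe such a “black box” is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Below is a minimal sketch, assuming scikit-learn is available and using synthetic data with invented feature names, not any real credit-scoring model:

```python
# Minimal permutation-importance sketch on a synthetic "credit scoring" dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["income", "debt_ratio", "late_payments", "age", "loan_amount"]  # invented
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: -item[1]):
    print(f"{name:>15}: {importance:.3f}")
```

Techniques like this only approximate an explanation; judging whether the surfaced factors are legitimate, fair and economically sensible is exactly the kind of judgment one learns through study.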

Black Box Phenomenon

Some advanced AI models, often referred to as “black box” models, are challenging to interpret because of their intricate architectures and the sheer volume of parameters. Understanding the internal workings of these models can be akin to looking into a black box where inputs go in, and outputs come out, but the processes in between are not easily explainable.

In the medical field, a deep learning model might be designed to assist in diagnosing diseases based on medical imaging data. While the model can achieve impressive accuracy, it might be challenging for healthcare professionals to comprehend how the model arrived at a specific diagnosis. This lack of interpretability raises concerns about trust and acceptance within critical applications.

In school, one of the main advantages is learning to interpret processes, results and risks, and to propose actions and corrective plans. The purpose of studying is to know what lies between inputs and outputs: the formulas, methodologies and analyses necessary to assess the risks and the results, and to propose action plans based on different assumptions, scenarios and sensitivity analyses.

Even though SSBM Geneva has a sophisticated, modern IT platform for interacting with students and sharing teaching content, constant live interaction with students remains necessary to understand their needs for help and to monitor their progress (e.g. the ticketing system and learning progress modules implemented by SSBM Geneva).

9. Ethical Considerations

The development and use of AI raise ethical questions that individuals should critically engage with, which might be neglected without studying.

One significant ethical consideration when using AI is the potential for algorithmic bias, which can lead to unfair or discriminatory outcomes, especially in sensitive areas such as hiring, lending, or law enforcement.

AI systems learn from historical data, and if the training data contains biases, the model may perpetuate or even exacerbate those biases in its predictions or decisions. This can result in unfair treatment of certain groups, reinforcing societal inequalities, and contributing to systemic discrimination.

Imagine an AI-powered hiring tool trained on historical data that reflects existing gender or racial biases in past hiring decisions. If the model is not carefully designed and validated, it might learn and replicate these biases, leading to the unfair exclusion of certain demographics from job opportunities.
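One simple check that can expose such a bias is to compare selection rates across groups (sometimes called a demographic parity check). Here is a minimal sketch with invented counts, not data from any real hiring tool:

```python
# Minimal demographic-parity check (invented counts, for illustration only).
applicants = {"group_a": 500, "group_b": 500}   # candidates screened, per group
selections = {"group_a": 120, "group_b": 60}    # candidates the model recommended

rates = {group: selections[group] / applicants[group] for group in applicants}
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.1%}")

# A common rule of thumb (the "four-fifths rule") flags the model if a group's
# selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Potential adverse impact against {group}: {rate / highest:.0%} of the top rate.")
```

A check like this does not fix the bias; it only makes it visible so that the training data, features or decision thresholds can be revisited.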

Diverse and Representative Training Data

Ensuring that training data is diverse and representative of the population can help reduce biases. However, it’s crucial to recognize that achieving complete neutrality is challenging, and continuous monitoring and adjustments are necessary.

Explainability and Transparency

Providing explanations for AI decisions enhances transparency and allows users to understand how a model arrived at a specific outcome.

Ethical AI Guidelines and Standards

Establishing and adhering to ethical guidelines and standards in the development and deployment of AI systems is essential. Organizations and developers should prioritize fairness, accountability and transparency in their AI initiatives.

In school, there is full consideration of the ethical codes and standards defined by legislation, the current market benchmarks, and future expectations. The associated risks, whether legal, financial, tax-related or other, are considered, properly addressed and learned in school.

Transparency and accountability are essential in ensuring that AI is used in a manner that benefits students and does not harm them. The ultimate goal should be to enhance the learning experience and improve learning outcomes, rather than replacing teachers or professors or compromising student privacy and security. Students must be informed about the data being collected, how it is used, and who has access to it. It is imperative to prioritize the ethical use of AI in learning to ensure that it serves the best interests of students and upholds their rights and privacy.

10. Personal Fulfillment

Learning is not just about acquiring information; it’s a deeply human pursuit that brings personal satisfaction, fulfilment, and a sense of achievement – aspects that AI cannot fully replicate.

One potential problem related to personal fulfilment when using AI is the risk of over-reliance on technology for personal growth and well-being, which might lead to a reduction in meaningful human experiences and connections.

As AI applications become more prevalent in various aspects of our lives, there is a risk that individuals may excessively rely on technology for self-improvement and personal fulfilment. This over-reliance might lead to a reduction in the pursuit of real-world experiences, face-to-face interactions, and personal challenges that contribute to genuine personal growth.

Example

Consider a scenario where an individual heavily relies on AI-powered life coaching apps or virtual assistants for guidance on decision-making, goal-setting, and emotional support. While these tools can offer valuable insights, an over-dependence on them might lead to a lack of authentic, human-driven experiences and connections. This could potentially result in a sense of emptiness or unfulfillment as genuine human interactions and experiences are neglected.

Addressing Personal Fulfilment Challenges

Balancing Technology Use

Encouraging a balanced approach to technology use is essential. While AI tools can offer support, individuals should also actively seek and engage in real-world experiences, personal relationships and activities that contribute to their well-being.

Mindful Technology Consumption

Practicing mindfulness and intentional use of technology helps individuals maintain control over their interactions with AI. Being aware of the potential impact on personal fulfilment and making conscious choices about when and how to use AI tools can contribute to a healthier relationship with technology.

Promoting Real-world Connections

Emphasizing the importance of face-to-face interactions, genuine relationships and shared experiences is crucial for personal fulfilment. AI should complement, not replace, the richness of human connections.

Let’s just imagine the real-life case where each member of a family is looking at their mobile phone during lunch or dinner. Another striking example is when your child, or you yourself, spend all day or night in front of a PlayStation or an AI tool instead of playing outside or spending time with family or friends.

Holistic Well-being

Recognizing that personal fulfilment involves a holistic approach to well-being, including physical, mental, and emotional aspects, is essential. AI tools can play a supportive role, but they should not be seen as substitutes for a well-rounded and fulfilling life.

The challenge lies in finding a balance where AI technologies enhance personal growth without overshadowing the irreplaceable value of genuine human experiences and connections. As AI continues to evolve, it’s important to consider its role in the context of a broader, multidimensional understanding of personal fulfilment.

In essence, the collaboration between human intelligence and AI amplifies the learning process. Studying becomes a means of refining one’s cognitive abilities, leveraging the strengths of both human and artificial intelligence.

As we embrace the technological advancements brought by AI, it is essential to recognize the enduring value of human learning, ensuring a harmonious coexistence that maximizes the potential for intellectual growth.

This article was written with the help of some information and input from ChatGPT, but our own human analysis, emotional intelligence and capacity for creative comparison, which come from genuine human learning and rich corporate and teaching experience, were used extensively to compare the advantages and disadvantages of AI with studying in school. It has to be highlighted that we at SSBM are not against the use of AI; we even use AI in different areas of teaching as a supportive tool to help students and professors learn. On top of that, SSBM offers an MBA program fully specialized in AI (see https://www.ssbm.ch/online-mba-in-artificial-intelligence/). However, aspects such as human understanding, emotional intelligence and the capacity for creativity that come from genuine human learning in school should be taken into consideration and will hopefully always remain key differentiators versus purely technology-driven education. Students who study at SSBM Geneva will master complex concepts, enabling them to think critically and engage in analytical reasoning – skills that go beyond the capabilities of AI.

Moreover, our students and regular analysis of students’ questionnaires confirm that the act of studying fosters discipline, perseverance and a sense of accomplishment.

Critical thinking and analytical reasoning are essential skills where traditional schooling (such as SSBM Geneva) still has a huge advantage over the capabilities offered by AI.

It is crucial to recognize that ChatGPT is not a substitute for human teachers or professors. While it can provide support and assistance, it cannot replicate the human element of teaching, which includes empathy, creativity, and adaptability to unique learning needs. Studying will not only give you the opportunity to learn and progress, but also to acquire a degree, which is a condition for finding a job and developing your professional career in any company. AI will not give you any degree, certificate or diploma. It may give you some solid knowledge. However, one should wonder how many candidates have been hired by reputable companies without a degree or diploma obtained from a traditional educational institution. Statistics show a very limited number; most such candidates work in their family businesses or on their own account.

So, we arrive at a final conclusion, echoing the words of Benjamin Franklin: ‘An investment in knowledge pays the best interest.’

This article was also published as a research commentary here.