Monday, 13 June 2022

Decision Management

Management Decision Making

Management decision-making is a critical part of the management planning function. Understanding the unique nature of managerial decisions requires understanding the types of decisions and the context for making those decisions.

Types or Categories of Management Decisions

Decision-making can be defined as selecting between alternative courses of action. Management decision-making concerns the choices managers face within their duties in the organization. Making decisions is an important aspect of planning. Decisions can be classified into three categories based on the level at which they occur.

Strategic Decisions

These decisions establish the strategies and objectives of the organization. These types of decisions generally occur at the highest levels of organizational management.

Tactical Decisions

Tactical decisions concern the tactics used to accomplish the organizational objectives. Tactical decisions are primarily made by middle and front-line managers.

Operational Decisions

Operational decisions concern the methods for carrying out the organization's delivery of value to customers. Operational decisions are primarily made by middle and front-line managers.

Decisions can be categorized based on the capacity of those making the decision.

Organizational Decisions 

An organizational decision is one that relates to or affects the organization. It is generally made by a manager or employee acting in their official capacity. These decisions are often delegated to others.

Personal Decisions

Personal decisions are those primarily affecting the individual - though the decision may ultimately have an effect on the organization as a result of its effect on the individual. These types of decisions are not made within a professional capacity. These decisions are generally not delegated to others.

Areas of Decision Management

The goal of decision management is to enhance business operations intelligence by ensuring quick, consistent, and accurate fact-based decisions. The quality of structured operational decisions, no matter how complex, should be constantly improving. There are five areas that affect decision management:

Data and analytics: Data is accessed and processed with the help of descriptive, diagnostic, and predictive techniques. You need strong data quality as a basis for accurate decision-making, and the outcomes of those decisions affect the data as well.

Business process management: Managing human tasks and the sequencing of automated business processes. Information from staff helps to make better decisions, and their roles are enhanced as a result.

Operations research: Optimizing and managing various goals based on standards and priorities that can be modeled. Decision management analyzes operations and suggests improvements that can be made.

Business rules management: Automating business rules and managing them based on inputs provided by subject matter experts (see the sketch after this list).

Robotics: Using software to imitate human behavior in the automation of actions and related interactions with software systems.
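To make the business-rules idea concrete, here is a minimal, hypothetical sketch in Python of how scripted rules might be evaluated for a decision such as loan approval. The rule names, thresholds, and fields are invented purely for illustration and are not from any particular product.

```python
# Minimal sketch of a rule-based decision service (illustrative only).
# The rules, thresholds, and fields below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class LoanApplication:
    credit_score: int
    annual_income: float
    requested_amount: float

def decide(app: LoanApplication) -> str:
    """Apply business rules in priority order and return a decision."""
    # Rule 1: hard rejection below a minimum credit score
    if app.credit_score < 550:
        return "reject"
    # Rule 2: refer large requests relative to income to a human underwriter
    if app.requested_amount > 0.5 * app.annual_income:
        return "refer_to_underwriter"
    # Rule 3: everything else is auto-approved
    return "approve"

print(decide(LoanApplication(credit_score=620, annual_income=40000, requested_amount=30000)))
# -> refer_to_underwriter
```

In a real decision management platform, rules like these would typically be authored and maintained by subject matter experts in a rules engine rather than hard-coded by developers.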

Decision management results in efficiency and productivity, two critical factors for successful business operations. As a concept, decision management can be used across a wide range of industries, functions, and areas of business. Many businesses make scores of operational decisions on a daily basis, and the quality of these decisions has a direct impact on the effectiveness of the company. Because every decision is shaped by data, regulations, and market dynamics, decision management becomes a necessity.

Benefits of Decision Management

Better Utilization of Time

Regardless of the model of the decision management support system, research shows that it reduces the decision cycle time. Employee productivity is the immediate benefit from the time saved.

Better Efficacy

The effectiveness of decisions made with decision management is still debated because the quality of these decisions is hard to measure. Research has largely taken the approach of examining soft measures, such as perceived decision quality, instead of objective measures. Those who advocate the creation of data warehouses hold the strong opinion that better and larger-scale analyses can substantially enhance decision-making.

Better Interpersonal Communication

Decision management systems open the door for better communication and collaboration among all decision-makers. Set rules ensure that all decision-makers are on a single platform, sharing facts and any assumptions made. Data-driven rule sets analyze the options and present decision-makers with the most promising outcomes, encouraging fact-based decision-making. Better access to data generally enhances the quality and clarity of decisions.

Cost Reduction

An outcome of good decision management rule sets is cost savings: better decision-making reduces labor costs, and infrastructure and technology costs come down as well.

Better Learning

In the long term, a by-product of decision management is that it encourages learning. There is more openness to new concepts and a more fact-based understanding of the business and the overall decision-making environment. Decision management can also come in handy for training new employees, an advantage not yet explored in full.

Increased Organizational Control

With decision-making rule sets, a lot of transactional data is made available for constant performance checks and ad hoc inquiries by business heads. This gives management a better view of how business operations work. Managers find this to be a useful aspect of decision-making, and the financial benefit of highly detailed data gradually becomes evident.

Disadvantages of Decision Management

As with any system, decision management systems can have a few disadvantages.

Information Overload

Considering the amount of data that goes through the system (and the fact that a problem is analyzed from multiple angles), there is a risk of information overload. With too many variables on hand, the decision-maker may be faced with a dilemma. Streamlined rule sets can help.

Over-Dependence

When decision-making is completely computer-based, it can lead to over-dependence. While it frees up staff hours for better use of skills, it also increases dependency on computer-based decision-making. Individuals can become less inclined to think independently and come to rely on computers to think for them.

Subjectivity

Decision management emphasizes alternatives that can be generated and compared objectively. Subjectivity therefore tends to take a backseat, which can affect decision-making and impact businesses: things that cannot be measured cannot be factored in.

Overemphasis on Decision Making

Not all issues an organization is faced with need the power of decision management. An emphasis has to be placed on utilizing decision-making capabilities for relevant issues.

Types of Decision Support Systems for Decision Making

Decision support systems are classified into two types:

Model-Based Decision Support Systems: These stand independent of any corporate information system. They work on the basis of strong theory or models and come with an excellent interface for easy interactivity.

Data-Based Decision Support Systems: These set-ups collect large amounts of data from a variety of sources, store it in warehouses, and analyze it. The warehouse stores historical data and also comes with some reporting and query tools.

In data-based decision support systems there are two main techniques that are employed:

Online Analytical Processing (OLAP): Based on queries, this provides quick answers to complex analytical questions. Managers and analysts can interactively examine data from multiple viewpoints.

Data Mining: By finding patterns and rules in existing data, useful decision-making information can be extracted to help identify trends and consumer behavior patterns.
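As a rough illustration of the OLAP idea, the hypothetical sketch below uses pandas to build a cube-style view (aggregating a small sales table along two dimensions) and then drills into one slice. The column names and numbers are made up for the example.

```python
# Illustrative OLAP-style "slice and dice" using pandas (hypothetical sales data).
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "product": ["A", "A", "A", "B", "B"],
    "revenue": [100, 120, 80, 60, 90],
})

# Aggregate revenue along two dimensions (region x quarter), as an OLAP cube view would.
cube = sales.pivot_table(values="revenue", index="region", columns="quarter", aggfunc="sum")
print(cube)

# "Drill down" into one slice of the cube: the South region, broken out by product.
print(sales[sales["region"] == "South"].groupby("product")["revenue"].sum())
```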






Sunday, 12 June 2022

Virtual Agents

A virtual agent (sometimes called an intelligent virtual agent (IVA), virtual rep, or chatbot) is a software program that uses scripted rules and, increasingly, Artificial Intelligence applications to provide automated service or guidance to humans.

Virtual agents are most commonly used by organizations in their customer service functions to answer routine customer queries, fulfill standard requests, and/or handle simple problems. For example, virtual agents are often used for initial customer interactions with call centers or click-to-chat features on websites. Virtual agents are also used in some organizations to handle employee-driven needs. For example, virtual agents are commonly deployed within the IT function to provide help desk-type services, such as employee requests for resetting computer passwords. They can also be used in organizations to guide employees through work tasks or processes. In this way, a virtual agent is akin to a digital assistant, an application program that understands natural language voice commands and is also deployed to fulfill people's needs or help them complete tasks.

Technology research and advisory firm Gartner predicted that 25% of customer service and support operations would use virtual assistants across their engagement channels in 2020, up from less than 2% in 2017. In addition, 25% of digital workers would use virtual assistants in their tasks on a daily basis by 2021, compared with less than 2% in 2019, according to Gartner.

Virtual agent vs. virtual assistant

The terms virtual agent and virtual assistant are often used interchangeably with each other, as well as with the term "chatbot." Although all three are types of computerized aid designed to serve people in various capacities, there are some subtle (although not definitive or universally accepted) distinctions between the terms. Virtual agent and virtual assistant are more closely aligned and, thus, more likely to be used interchangeably. However, many associate the term virtual assistant with Apple's Siri, Amazon's Alexa, and Google Assistant -- all platforms that draw on the internet and other technologies to perform internet searches and digital tasks, such as updating calendars or checking weather forecasts in response to a user's request. The term virtual agent, on the other hand, is more commonly associated with organizational use, where agents are put to work assisting customers or employees. A chatbot is a specific type of virtual agent -- a conversational agent -- with the capability to "chat" via email, messaging, or voice. However, the term "chatbot" does not encompass the wider array of virtual agent capabilities, which might also include visual representations such as a hologram, as well as other characteristics beyond verbal communication. The term "virtual agent" can also refer to a human agent who works remotely from his or her employer's location to serve customers.

How virtual agents work

Virtual agent technologies initially emerged in the first decade of the 2000s. At the most basic level, virtual agent technologies work on a preprogrammed scripted model. Organizations could create virtual agents that were scripted to respond in specific ways to specific human requests. Organizations generally identified the particular workflows that would be handled by the virtual agents, mapping out what a virtual agent should do based on each specific request or inquiry made by a person. Organizations then created the scripts to have the agent respond as needed to each request, which the agent could identify by predetermined keywords that had been programmed into the platform. In other words, the virtual agent would identify the keywords and respond with the scripted response that in its computerized analysis best matches the keywords.
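A minimal sketch of this scripted, keyword-driven approach might look like the following. The keywords, responses, and hand-off message are hypothetical and purely illustrative; real platforms map far richer workflows and escalation rules.

```python
# Minimal sketch of a scripted, keyword-matching virtual agent.
# The keywords and canned responses below are hypothetical examples.

SCRIPT = {
    "password": "You can reset your password at the self-service portal.",
    "refund":   "Refunds are processed within 5-7 business days.",
    "hours":    "Our support line is open 9am-5pm, Monday to Friday.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keyword, reply in SCRIPT.items():
        if keyword in text:          # first keyword hit wins in this toy example
            return reply
    # No keyword matched: hand the interaction over to a human agent,
    # just as early scripted systems did when inquiries digressed from the script.
    return "Let me transfer you to a live agent."

print(respond("How do I reset my password?"))
```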

As such, these virtual agents could handle routine tasks where an inquiry or request could be met with a predictable response. Organizations programmed their virtual agents to turn over the customer interaction to human agents when requests hit a certain point in the workflow or when the inquiries digressed from the script. In the second decade of the 2000s, particularly toward the latter half, virtual agent platforms incorporated machine learning, natural language processing, and artificial intelligence to create intelligent virtual agents that could handle more types of queries, as well as less predictable inquiries, requests, and workflows. These intelligent virtual agent platforms can also connect with back-end systems, thereby providing more personalized responses to the customers or employees who are interacting with the agent systems. Moreover, the AI capabilities built into these platforms enable these agents to "learn," so they can become more efficient and effective as they work, and they can also develop the capacity to handle a wider range of tasks.

Virtual agent capabilities

As virtual agent software has improved in the second half of the 2010s with advances in AI and cognitive computing, virtual agents have moved far beyond interactive voice response (IVR) systems. In fact, technological advances have enabled virtual agents to understand customer intent and provide personalized answers to customer questions in a human-like manner. However, virtual agents still typically communicate with customers via email or live chat on corporate websites. Companies may also use an avatar to provide a visual representation of the virtual agent. Additionally, most companies as of 2020 still use virtual agents to handle highly repeatable tasks; for complicated tasks, live customer service agents are required. In the world of customer relationship management (CRM) software, virtual agents are used to provide 24/7 customer service, including answering questions on accounts, helping with password resets, providing recommendations, or following up on sales and marketing leads via email correspondence. For example, a virtual sales agent can be used to email potential customers to request a meeting with a live sales agent. When a customer agrees to a meeting, the virtual agent can obtain a phone number and collect the information a sales rep might need to conduct a live conversation. This is enormously useful for sales and marketing teams, as they typically focus only on leads deemed "high quality." With a virtual agent, all leads can be followed up on, which could result in higher sales. In addition, virtual agents cost significantly less than human employees.

How to use a virtual agent

Companies interested in adopting virtual agent software through a cloud service provider or software vendor must invest time and resources into "training" the virtual agent. This initial setup period may take months to complete, depending on the level of confidence the company desires. Virtual agents are based on machine learning technology, which improves over time as the system ingests more data and "learns" through continued use. Virtual agents can only provide information that has been "fed" to the AI system, and if the system contains bad data, customers will receive false information. This makes the setup phase critical. The initial time investment is worthwhile when it results in reduced call volume and frees up live agents to focus on complex customer service tasks, while simultaneously providing a good customer experience.





Saturday, 11 June 2022

Speech Recognition & History of Voice Recognition Technology

What is Speech Recognition

Speech recognition software is a computer program that’s trained to take the input of human speech, interpret it, and transcribe it into text.

How Does It Work

Speech recognition software works by breaking down the audio of a speech recording into individual sounds, analyzing each sound, using algorithms to find the most probable word fit in that language, and transcribing those sounds into text. Speech recognition software uses natural language processing (NLP) and deep learning neural networks. “NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way,” according to the algorithms blog. This means that the software breaks the speech down into bits it can interpret, converts it into a digital format, and analyzes the pieces of content. From there, the software makes determinations based on its programming and on speech patterns, forming hypotheses about what the user is actually saying. After determining what the user most likely said, the software transcribes the conversation into text. This all sounds simple enough, but advances in technology mean these multiple, intricate processes happen at lightning speed. Machines can now transcribe human speech far more quickly than humans can, and often just as accurately.
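As a rough illustration of this pipeline in practice, the sketch below uses the open-source Python SpeechRecognition package to send a short audio file to a cloud recognizer and print the transcript. The file name is a placeholder, and the Google Web Speech backend shown is only one of several possible back ends; it requires network access.

```python
# Minimal transcription sketch using the open-source SpeechRecognition package
# (pip install SpeechRecognition). The file name is a placeholder.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("meeting_clip.wav") as source:   # hypothetical audio file
    audio = recognizer.record(source)               # read the whole file into memory

try:
    text = recognizer.recognize_google(audio)        # send the audio to the recognizer
    print("Transcript:", text)
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
except sr.RequestError as err:
    print("Could not reach the recognition service:", err)
```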

Speech Recognition & AI Software

Voice recognition and transcription technology have come a long way since their inception. We now use voice recognition technology in our everyday lives: with voice search on the rise, more people are using assistants like Google Home, Siri, and Amazon Alexa.

History of Voice Recognition Technology

Programmers and engineers have made great leaps in the science of voice recognition over the past decade, so you’d be forgiven for thinking that this technology is a relatively new development. Much of the reporting and scholarship around voice recognition tech only focuses on the post-2011 Age of Siri, following the release of Apple’s now-ubiquitous personal assistant.

But there’s a rich secret history to voice recognition tech that stretches back to the mid-20th-century, to those early days when rudimentary computers needed to fill an entire warehouse with vacuum tubes and diodes just to crunch a simple equation. And this history not only reveals some interesting trivia about the technology we know and love today, but it also points the way toward potential future breakthroughs in the field. Let’s explore the untold story of voice recognition technology, and see how much progress has been made over the years (and how much has stayed the same).

AUDREY and the Shoebox

During the first half of the 20th century, the U.S. research firm Bell Laboratories (named after Alexander Graham Bell, the inventor of the telephone) racked up a string of impressive technological advances: the birth of radio astronomy (1931), solar batteries (1941), and transistors (1947). Then in 1952, Bell Labs marked another groundbreaking advance: the AUDREY system, a set of vacuum-tube circuitry housed in a six-foot-high relay rack that could understand numerical digits spoken into its speaker box. When adapted to a specific speaking voice, AUDREY could accurately interpret more than 97% of the digits spoken to it. AUDREY is no doubt primitive by today's standards, but it laid the groundwork for voice dialing, a technology that was widely used among toll-line operators. (Remember those?)

Ten years later, IBM unveiled its Shoebox machine at the 1962 World's Fair in Seattle. Shoebox could understand up to 16 words, including the digits 0 through 9. And when Shoebox heard a number combined with a command word (like "plus" or "total"), it would instruct a linked adding machine to calculate and print the answer to simple arithmetic problems. Just like that, the world's first calculator powered by voice recognition was born!

HARPY takes wing

Voice recognition began to take off as a field in the 1970s, thanks in large part to interest and funding from the U.S. Department of Defense and DARPA. Running from 1971 to 1976, DARPA’s Speech Understanding Research (SUR) program was one of the largest research initiatives ever undertaken in the field of voice recognition.

SUR ultimately helped create Carnegie Mellon's HARPY voice recognition system, which was capable of processing and understanding more than 1,000 words. HARPY was particularly significant due to its use of "beam search" technology, a far more efficient method for machines to retrieve the meaning of words from a database and determine the structure of a spoken sentence. Indeed, advances in voice recognition have always been closely tied to similar strides in search engine tech; look no further than Google's current dominance in both fields for proof of this fact.

From recognition to prediction

By the 1980s, voice recognition tech had begun to advance at an exponential rate, going from simple machines that could understand only dozens or hundreds of spoken words to complex networked systems that could comprehend tens of thousands. These advances were largely powered by the development of the hidden Markov model (HMM), a statistical method that allowed computers to predict whether a sound corresponds to a word, rather than trying to match the sound's pattern against a rigid template. In this way, HMMs enabled voice recognition machines to greatly expand their vocabulary while also comprehending more conversational speech patterns. Armed with this technology, voice recognition began to be adopted for commercial use and became increasingly common in several specialized industries. The 1980s is also when voice recognition began to make its way into home consumer electronics, as with Worlds of Wonder's 1987 "Julie" doll, which could understand basic phrases and reply.
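To give a flavor of how an HMM scores competing interpretations, here is a toy Viterbi decoder over a two-state model. The states, observations, and probabilities are invented purely for illustration and bear no relation to any real acoustic model.

```python
# Toy Viterbi decoding for a two-state HMM with made-up probabilities,
# just to show how an HMM picks the most likely hidden sequence behind
# a series of observations (real recognizers use far larger models).

states = ["vowel", "consonant"]
start_p = {"vowel": 0.6, "consonant": 0.4}
trans_p = {
    "vowel":     {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.6, "consonant": 0.4},
}
emit_p = {
    "vowel":     {"low_freq": 0.7, "high_freq": 0.3},
    "consonant": {"low_freq": 0.2, "high_freq": 0.8},
}

def viterbi(observations):
    # best[s] = (probability of the best path ending in state s, that path)
    best = {s: (start_p[s] * emit_p[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        best = {
            s: max(
                ((prob * trans_p[prev][s] * emit_p[s][obs], path + [s])
                 for prev, (prob, path) in best.items()),
                key=lambda item: item[0],
            )
            for s in states
        }
    return max(best.values(), key=lambda item: item[0])

prob, path = viterbi(["low_freq", "high_freq", "low_freq"])
print(path, prob)   # e.g. ['vowel', 'consonant', 'vowel'] with its probability
```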

Voice recognition goes mainstream

In 1990, we saw the release of the very first consumer-grade voice recognition product: Dragon Dictate, priced at $9,000 (that's about $17,000 in 2017 dollars). Its 1997 successor, Dragon NaturallySpeaking, was the first commercial voice recognition program that could understand natural speech at up to 100 words per minute.

1997 also saw the release of BellSouth's VAL, the very first "voice portal." VAL was an interactive system that could respond to questions over the phone, laying the groundwork for the technology powering the voice-activated menus you hear today when calling your bank or ISP. But after more than 40 years of advancement after advancement in voice recognition technology, developments in the field stalled from the mid-1990s through the late 2000s. At the time, voice recognition programs had hit a ceiling of about 80% accuracy in recognizing spoken words, due to the limitations of the HMM approach underpinning the technology.

Google, Siri, and the voice recognition revolution

Apple's iPhone had already made waves when it came out in 2007, as tech began to re-orient itself toward an increasingly smartphone-centric, mobile-first future. But with the release of the Google Voice Search app for the iPhone in 2008, voice recognition technology began to once again make major strides. In many ways, smartphones proved to be the ideal proving ground for the new wave of voice recognition technology: voice was simply an easier and more efficient input method on devices with such small screens and keyboards, which incentivized the development of hands-free technology.

But even more significantly, the design principles Google laid down with Voice Search in 2008 continue to define voice recognition technology to this day: the processing power necessary for voice recognition could be offloaded to Google's cloud data centers, enabling the kind of high-volume data analysis capable of storing human speech patterns and accurately matching words against them. Google's approach was then refined by Apple in 2011 with the release of Siri, an AI-driven personal assistant that likewise relies on cloud computing to predict what you're saying. In many ways, Siri is a prime example of Apple doing what it does best: taking existing technology and applying a mirror-sheen of polish to it. Siri's easy-to-use interface, combined with its sparkling 'personality' and Apple's expert marketing of the iPhone, helped make the program nearly ubiquitous.

The Potential Variables in Speech Recognition Software

“Correctness and accuracy are two different things,” says CallRail Product Manager Adam Hofman. The difference lies in that correctness means completely “free from error,” while accuracy means “correct in all details” and “capable of or successful in reaching the intended target.” With speech recognition, this means that while the transcription may not be 100% correct (some words, names, or details might be mistranscribed), the user still understands the overall idea of the chunk of speech that has been transcribed. That is to say, the output is not just a jumble of random words; a cohesive concept can generally be interpreted from the text. However, no two people are alike, and therefore speech patterns and other deviations must be taken into account. Anomalies like accents (even among native English speakers) can cause speech recognition software to miss certain aspects of conversations. The way speakers enunciate versus mumble, the speed at which they speak, and even fluctuations in voice volume can throw speech recognition technology for a loop.
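One common way to put a number on transcription quality is word error rate (WER): the edit distance between the reference transcript and the hypothesis, divided by the number of reference words. The sketch below is a minimal, generic implementation with made-up sentences; it is not a description of CallRail's internal metrics.

```python
# Minimal word error rate (WER) sketch: edit distance over words, normalized
# by the length of the reference transcript. Example sentences are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("please call the front desk", "please call a front desk"))  # 0.2
```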

Regardless, most modern speech recognition technologies work along with machine learning platforms. Hence, as a user continues to use the technology, the software learns that particular person’s speech patterns and variances and adjusts accordingly. In essence, it learns from the user. CallRail’s voice recognition technology is used in conversation intelligence features like CallScore, Automation Rules, and Transcriptions.

The Benefits of Using Speech Recognition Software

Though speech recognition technology falls short of complete human intelligence, there are many benefits to using it, especially in business applications. In short, speech recognition software helps companies save time and money by automating business processes and providing instant insights into what's happening in their phone calls. Because software performs speech recognition and transcription faster and more consistently than a human can, it's more cost-effective than having a human do the same job. Transcription can also be a tedious job for a person to do at the rate at which many businesses need the service performed. Speech recognition and transcription software costs less per minute than a human performing at the same rate, and it never gets bored with the job.






Friday, 10 June 2022

Natural Language Generation

Natural language processing (NLP) is an area of computer science and artificial intelligence concerned with the interaction between computers and humans in natural language. The ultimate goal of NLP is to help computers understand language as well as we do. It is the driving force behind things like virtual assistants, speech recognition, sentiment analysis, automatic text summarization, machine translation, and much more. In this post, we'll cover the basics of natural language processing, dive into some of its techniques, and also learn how NLP has benefited from recent advances in deep learning.

Introduction 

Natural language processing (NLP) sits at the intersection of computer science, linguistics, and machine learning. The field focuses on communication between computers and humans in natural language; NLP is all about making computers understand and generate human language. Applications of NLP techniques include voice assistants like Amazon's Alexa and Apple's Siri, but also things like machine translation and text filtering.

Is NLP Artificial Intelligence, Machine Learning, or Deep Learning?

The question itself is not quite right! People sometimes use the terms AI, ML, and DL interchangeably, so let's clarify those first and then come back to the question.

Clearing the Confusion: AI vs. Machine Learning vs. Deep Learning Differences

The commencement of modern AI can be traced to classical philosophers’ attempts to describe human thinking as a symbolic system. But the field of AI wasn’t formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term “artificial intelligence” was coined.

NLP: How Does NLP Fit into the AI World

With a basic understanding of artificial intelligence, machine learning, and deep learning, let's revisit our original question: is NLP artificial intelligence, machine learning, or deep learning? The terms AI, NLP, and ML (machine learning) are sometimes used almost interchangeably. However, there is an order to the madness of their relationship. Hierarchically, natural language processing is considered a subset of machine learning, while NLP and ML both fall under the larger category of artificial intelligence.

Natural Language Processing combines Artificial Intelligence (AI) and computational linguistics so that computers and humans can talk seamlessly.

NLP endeavors to bridge the divide between machines and people by enabling a computer to analyze what a user said (input speech recognition) and process what the user meant. This task has proven quite complex.

To converse with humans, a program must understand syntax (grammar), semantics (word meaning), morphology (word forms and tense), and pragmatics (conversation). The number of rules to track can seem overwhelming and explains why earlier attempts at NLP led to disappointing results. NLP slowly improved, moving from cumbersome rule-based methods to pattern-learning-based programming methodologies. Siri appeared on the iPhone in 2011. In 2012, the discovery that graphics processing units (GPUs) could be used to train digital neural networks gave both neural networks and NLP a further boost.

NLP empowers computer programs to comprehend unstructured content by utilizing AI and machine learning to make inferences and give context to language, similar to what human brains do. It is a tool for revealing and analyzing the “signals” buried in unstructured information. Organizations can then get a deeper comprehension of public perception of their products, services, and brand, as well as those of their rivals.

Now Google has released its own neural-net-based translation engine for eight language pairs, closing much of the quality gap between its old system and a human translator and fuelling increasing interest in the technology. Computers today can already produce an eerie echo of human language if fed the appropriate material. Over the past few years, deep learning (DL) architectures and algorithms have made impressive advances in fields such as image recognition and speech processing. Their application to natural language processing (NLP) was less impressive at first but has now proven to make significant contributions, yielding state-of-the-art results for some common NLP tasks. Named entity recognition (NER), part-of-speech (POS) tagging, and sentiment analysis are some of the problems where neural network models have outperformed traditional approaches. The progress in machine translation is perhaps the most remarkable of all.

NLP is not Just About Creating Intelligent bots

NLP is a tool for computers to analyze, comprehend, and derive meaning from natural language in an intelligent and useful way. This goes way beyond the most recently developed chatbots and smart virtual assistants. In fact, natural language processing algorithms are everywhere, from search and online translation to spam filters and spell checking.

Components of NLP

NLP can be divided into two basic components.

Natural Language Understanding

Natural Language Generation

Natural Language Understanding (NLU)

NLU is naturally harder than NLG. Really? Let's see what challenges a machine faces while trying to understand language. There is a lot of ambiguity to resolve when interpreting a language.

Lexical Ambiguity occurs when a word carries more than one sense, so the sentence containing it can be interpreted differently depending on which sense is correct. Lexical ambiguity can be resolved to some extent using part-of-speech tagging techniques (a small example follows after the ambiguity types below).

Syntactical Ambiguity occurs when a sequence of words can be given more than one grammatical structure, and hence more than one meaning. It is also termed grammatical ambiguity.

Referential Ambiguity: Very often a text mentions an entity (something or someone) and then refers to it again, possibly in a different sentence, using another word. A pronoun causes referential ambiguity when it is not clear which noun it refers to.
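As a small illustration of the part-of-speech tagging mentioned under lexical ambiguity, the sketch below uses NLTK to tag the word "book" in two sentences. It assumes the NLTK tokenizer and tagger resources have been downloaded, and the exact tags may vary slightly between NLTK versions.

```python
# Minimal sketch: using part-of-speech tags to separate the two senses of "book".
import nltk
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

from nltk import word_tokenize, pos_tag

print(pos_tag(word_tokenize("I will book a table for two.")))
# "book" is typically tagged as a verb (VB) here ...
print(pos_tag(word_tokenize("She read a good book.")))
# ... and as a noun (NN) here, resolving the lexical ambiguity.
```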

Natural Language Generation (NLG)

It is the process of producing meaningful phrases and sentences in the form of natural language from some internal representation.

Text planning  

It includes retrieving the relevant content from the knowledge base.

Sentence planning 

It includes choosing required words, forming meaningful phrases, and setting the tone of the sentence.

Text Realization 

It maps sentence plans onto the final sentence structure.
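A toy, template-based sketch of these three stages is shown below. The weather facts, field names, and templates are invented for illustration; real NLG systems are considerably more sophisticated.

```python
# Toy illustration of the three NLG stages over a hypothetical weather record.

knowledge_base = {"city": "Pune", "condition": "light rain", "high_c": 27, "low_c": 21}

def text_planning(kb):
    # Decide WHAT to say: retrieve the relevant content from the knowledge base.
    return {"topic": "weather", "facts": kb}

def sentence_planning(plan):
    # Decide HOW to say it: choose words, group facts into phrases, set the tone.
    f = plan["facts"]
    return ["expect {condition} in {city}".format(**f),
            "temperatures between {low_c} and {high_c} degrees".format(**f)]

def text_realization(phrases):
    # Map the sentence plan onto a grammatical surface sentence.
    first = phrases[0][0].upper() + phrases[0][1:]
    return first + ", with " + phrases[1] + "."

print(text_realization(sentence_planning(text_planning(knowledge_base))))
# -> "Expect light rain in Pune, with temperatures between 21 and 27 degrees."
```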

Levels of NLP

In the previous sections, we discussed different problems associated with NLP. Now let us look at the typical steps involved in performing NLP tasks. Keep in mind that the section below describes a standard workflow; it may differ drastically in real-life implementations depending on the problem statement or requirements.

Phonological Analysis: 

This level is applied only if the text originates from speech. It deals with the interpretation of speech sounds within and across words. A speech sound might give a big hint about the meaning of a word or a sentence.

Morphological Analysis: 

Deals with understanding distinct words according to their morphemes (the smallest units of meaning). Take, for example, the word “unhappiness”. It can be broken down into three morphemes (prefix, stem, and suffix), with each conveying some form of meaning: the prefix un- refers to “not being”, while the suffix -ness refers to “a state of being”. The stem happy is considered a free morpheme since it is a “word” in its own right. Bound morphemes (prefixes and suffixes) require a free morpheme to which they can be attached, and therefore cannot appear as a “word” on their own.

Lexical Analysis:

It involves identifying and analyzing the structure of words. The lexicon of a language is its collection of words and phrases. Lexical analysis divides the whole chunk of text into paragraphs, sentences, and words. In order to deal with lexical analysis, we often need to perform lexicon normalization.

Stemming: Stemming is a rudimentary rule-based process of stripping suffixes (“ing”, “ly”, “es”, “s”, etc.) from a word.

Lemmatization: Lemmatization, on the other hand, is an organized, step-by-step procedure for obtaining the root form of a word; it makes use of vocabulary (the dictionary meanings of words) and morphological analysis (word structure and grammar relations).
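The difference between the two is easy to see with NLTK's stock stemmer and lemmatizer. This minimal sketch assumes the WordNet data has been downloaded.

```python
# Stemming vs. lemmatization with NLTK (requires the WordNet data).
import nltk
nltk.download("wordnet")

from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["studies", "cries", "feet"]:
    print(word, "-> stem:", stemmer.stem(word), "| lemma:", lemmatizer.lemmatize(word))
# Stemming just strips suffixes ("studies" -> "studi"), while lemmatization
# returns dictionary forms ("studies" -> "study", "feet" -> "foot") using
# vocabulary and morphological analysis.
```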

Syntactic Analysis:

Deals with analyzing the words of a sentence so as to uncover the grammatical structure of the sentence. For example, the phrase "colourless green idea" is grammatically well-formed but would be rejected by semantic analysis, because "colourless" and "green" together make no sense. Syntactic parsing involves analyzing the words in the sentence for grammar and arranging them in a manner that shows the relationships among the words. Dependency grammar and part-of-speech tags are the important attributes of text syntactics.
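For a concrete look at part-of-speech tags and dependency relations, the sketch below uses spaCy's small English model. It assumes the model has been installed separately, and the exact labels may differ slightly between model versions.

```python
# Syntactic parse with spaCy: part-of-speech tags plus dependency relations
# (requires: pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog")

for token in doc:
    # token.pos_ is the part of speech; token.dep_ is its grammatical relation
    # to token.head, which together describe the sentence's dependency structure.
    print(f"{token.text:6} {token.pos_:6} {token.dep_:10} head={token.head.text}")
```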

Semantic Analysis:

Determines the possible meanings of a sentence by focusing on the interactions among word-level meanings in the sentence. Some people may think it’s the level that determines the meaning, but actually, all the levels do. The semantic analyzer disregards sentences such as “hot ice cream”.

Discourse Integration:

Focuses on the properties of the text as a whole that convey meaning by making connections between component sentences; in other words, it builds a sense of the context. The meaning of any single sentence depends on the sentences that precede it and can also shape the meaning of the sentences that follow. For example, the word "that" in the sentence "He wanted that" depends upon the prior discourse context.

Pragmatic Analysis: 

Explains how extra meaning is read into texts without actually being encoded in them. This requires much world knowledge, including the understanding of intentions, plans, and goals. Consider the following two sentences:

The city police refused the demonstrators a permit because they feared violence.

The city police refused the demonstrators a permit because they advocated revolution.

The meaning of “they” in the two sentences is different. In order to figure out the difference, world knowledge in knowledge bases and inference modules should be utilized. Pragmatic analysis helps users discover this intended effect by applying a set of rules that characterize cooperative dialogues. For example, "close the window?" should be interpreted as a request rather than an order.









Thursday, 9 June 2022

Cybersecurity & Importance of Cybersecurity

Every square IS a rectangle, because a square is a quadrilateral with all four angles being right angles. Similarly, cybersecurity IS a part of the IT security umbrella, along with its counterparts, physical security and information security. But not every rectangle is a square, since the criteria to qualify as a square mean all sides must be the same length. The point is that not all IT security measures qualify as cybersecurity, as cybersecurity has its own distinct assets to protect. Of course, the threat to these electronic assets is hackers who have malicious intent to steal proprietary data and information via data breaches. Thus, it would seem the fully realized definition should include an evolving set of cybersecurity tools designed to protect confidential data from unauthorized access. To do so, it's necessary to consider how people, processes, and technology all play equally important roles in keeping information safe.

What Is IT Security?

Physical security:

Focuses on how you keep people and infrastructure safe. In this category, you focus on securing buildings, server rooms, and wiring closets, and on proper lighting for buildings and parking lots, for example. It also involves understanding how to use security cameras, as well as actual guards and even guard dogs.

Information security:

 Focuses on keeping all data and derived information safe. This includes physical data (e.g., paper, computers) as well as electronic information. In this category, individuals focus on data backups, as well as monitoring techniques to make sure that no one has tampered with data or exfiltrated information. This category focuses less on the actual equipment and computing resources because it focuses on the data itself. And, yes, I’m distinguishing between data and information: data is raw and unprocessed. Information is derived from data after quite a bit of scrubbing, processing, and handling.

Cybersecurity:

 Focuses on protecting electronic assets – including the Internet, WAN, and LAN resources – used to store and transmit that information. Cybersecurity tends to focus on how malicious actors use these resources to attack information. Those individuals interested in cybersecurity are the ones interested in making sure that hackers can’t use electronic means to gain improper access to data and information.

Why Cybersecurity Is Important

One of the many advantages of living in a world where every device is connected is convenience. It’s incredibly easy to conduct work, manage your social calendar, shop and make appointments from your smartphone or device. That’s why it’s become second nature to many of us. But, of course, the convenience of connected data also means threats from bad actors can do a lot of damage. Cybersecurity initiatives are essential to protecting our data and thus, our way of life.


Types of Cybersecurity

Critical infrastructure security

Application security

Network security

Cloud security

Internet of Things (IoT) security


Cybersecurity risk

Suspicious URLs and Emails: 
Explain to employees that if something looks strange, it probably is! Encourage staff to pay attention to URLs, to delete emails that lack content or appear to come from a spoofed address, and stress the importance of guarding personal information. As an IT professional, it's your responsibility to raise awareness of potential cybersecurity threats.

Password Idleness: 
We know that holding on to the same password for ages isn't a great idea. But Bob in finance may not understand that. We all carry a plethora of passwords, and since it's a best practice not to duplicate your passwords, it's understandable that some of us need to write them down somewhere. Provide suggestions on where to store passwords securely.

Personally Identifiable Information: 
Most employees should understand the need to keep personal browsing, like shopping and banking tasks, to their own devices. But everybody does a bit of browsing for work, right? Emphasize the importance of keeping an eye on which websites they visit and where links may lead. And that includes social media. Karen in customer service may not realize that sharing too much on Facebook, Twitter, Instagram, etc. (like personally identifiable information) is just one way hackers can gather intel.

Backups and Updates:
 It’s fairly easy for unsavvy tech consumers to go about their daily business without backing up their data regularly and updating their system’s anti-virus. This is a job for the IT department. The biggest challenge here is getting employees to understand when they need your help with these items.

Physical Security for Devices:
 Think about how many people in your office leave their desks for meetings, gatherings, and lunch breaks. Are they locking their devices? Highlight the need to protect information each and every time a device is left unattended. You can use the airport analogy. Airport staff is constantly telling us to keep track of our bags and never leave them unattended. Why? Well, because you just don’t know who is walking by. Encourage employees to protect their devices with as much care as they protect their baggage.

Processes

When employees outside of the IT department are trained, IT pros can focus on process. The processes by which cybersecurity professionals go about protecting confidential data are multi-faceted. In short, these IT pros are tasked with detecting and identifying threats, protecting information, responding to incidents, and recovering from them. Putting processes into place not only ensures each of these buckets is continuously monitored; if a cybersecurity attack happens, referencing a well-documented process can save your company time, money, and the trust of your most valuable asset: your customers. The National Institute of Standards and Technology (NIST) under the U.S. Commerce Department has developed a cybersecurity framework for private-sector companies to use as a guide in creating their own best practices. The standards were compiled by NIST after former U.S. President Barack Obama signed an executive order on critical infrastructure cybersecurity in 2013, with the first version of the framework released in 2014. It's a great resource to use as you work to combat your cybersecurity risk.

Technology

1) The technology you’ll use to prevent and combat cybersecurity attacks, like DNS filtering, malware protection, antivirus software, firewalls, and email security solutions.

2) The technology your data lives on that needs your protection, like computers, smart devices, routers, networks & the cloud.

Back in the day, cybersecurity initiatives focused on defensive measures inside the boundaries of traditional tech. But today, policies, like Bring Your Own Device (BYOD), have blurred those lines and handed hackers a much broader realm to penetrate. Remembering cybersecurity basics like locking all of your doors, windows, elevators, and skylights will keep you from joining the cyber-crime statistics.


Cybersecurity Threats


Malware
Malware is software that has been created to intentionally cause damage. Commonly known as a virus (among other things), malware can cause harm simply by opening the wrong attachment or clicking on the wrong link.

Ransomware
Ransomware is actually a type of malware. The difference here is that ransomware infects a network or steals confidential data and then demands a ransom (typically currency of some sort) in exchange for access to your systems.

Phishing Attacks
Phishing is just like it sounds. Hackers throw a line out there hoping that you’ll bite, and when you do, they steal sensitive information like passwords, credit card numbers, and more. Phishing attacks usually come in the form of emails that look legitimate and encourage you to reply.

Social Engineering
Social engineering involves malicious human interaction. This is a case of people outright lying and manipulating others to divulge personal information. Often, these people obtain information from social media profiles and posts.












Wednesday, 8 June 2022

Security Analyst

What is a security analyst?

Security analysts can work across the spectrum of computer and information networks. From corporate databases to banking networks and from office networks to military intelligence, security analysts can be found anywhere that large amounts of information are being stored, shared, or used by a computer. A well-trained security analyst will probably have an advanced understanding of both hardware and software, and how data is stored and managed. The top three cybersecurity threats facing information technology networks of all scales are ransomware attacks, social engineering attacks, and the theft or compromise of sensitive information from inside an organization or company. This means that cybersecurity analysts need to be prepared to deal with a wide variety of threats and look for security issues both internally and externally. 

Security analyst skills

The cybersecurity skills and expertise required will vary widely from job to job. Some security analyst job openings look for people with specific skills, like managing a proprietary network, training other employees, or conducting penetration testing on the organization's digital properties. For students and early career professionals, one key tactic for understanding which job skills are most valuable and marketable is to study a number of different job descriptions at various kinds of employers. That research should start to show trends and reveal how current skills and experience map to opportunities. Understanding the current job market will also help you make decisions about what kinds of security certifications and degrees to pursue.

Despite the wide variability of cybersecurity analyst jobs, there are a number of characteristics or skills that are common to the profession.

1) Programming experience. Since cybersecurity analysts are charged with defending digital infrastructure, it makes sense that cybersecurity professionals are expected to have some base level of programming experience and familiarity with foundational programming languages such as C, C++, PHP, Perl, and Java. 

2) Analytical. Security analysts work with data from a variety of sources and so they need to be able to take information from many inputs to make decisions.

3) Detail-oriented. Much of the work of cybersecurity analysts comes down to examining small details — like lines of computer code or network commands — that could have major impacts.

4) Forward-thinking. Security analysts need to anticipate the future based on current data and trends and then make security recommendations accordingly.

What do security analysts do?

A security analyst often wears a lot of hats in an organization or company. They might be part police officer, part systems designer, part policymaker, and part trainer or teacher.

While security analysts could be thought of as the first responders of the digital world, as they are often called on to triage security incidents, the reality is that most cybersecurity analysts spend their time developing policies and procedures, reviewing best practices, and providing training for company or organization employees. In an optimal world, a security analyst is taking proactive steps to prevent security breaches in the first place. That could mean constantly updating systems and software, or it could mean actively looking for ways to knock down firewalls or find loopholes in computer code or operating systems.

Outlook for security analysts

As digital transformation across industries and sectors grows, security analysts will continue to be in high demand. Traditionally, cybersecurity analysts worked in niche industries related to vulnerable sectors of the economy, such as the military, finance and banking, and oil and gas. But now, consumer companies and everyday brands are hiring cybersecurity analysts to help protect their companies and operations. The need for well-trained cybersecurity analysts is so pronounced that the Bureau of Labor Statistics (BLS) predicts that employment opportunities for cybersecurity professionals will grow by 36 percent through 2024. This projected rate of job growth is much faster than in other sectors of the economy and is expected to continue over the next several years as more companies and organizations invest in cybersecurity positions.




