Demystifying Traditional (Symbolic) AI
Before the rise of chatbots, neural nets, and self-learning systems, there was a different kind of AI running the show. It's known as Traditional AI, also lovingly called Symbolic AI or GOFAI (Good Old-Fashioned AI). Think of it as the “rulebook brain” of artificial intelligence—crisp, logical, and very human in its reasoning.
Back in the 1970s, a program named MYCIN helped doctors diagnose bacterial infections. It asked a series of questions and followed a tree of logical rules, all written by experts, to recommend treatments. No machine learning, no neural networks. Just raw, rule-based intelligence. And guess what? It performed as well as real doctors in some tests!
Another early AI program, ELIZA, built in the 1960s, mimicked a psychotherapist by following scripted patterns. It wasn’t "intelligent" by today’s standards, but people formed emotional connections with it, proving that even simple logic-based machines can feel... human.
What is Symbolic AI?
Let's understand traditional AI
Symbolic AI is built on logic, symbols, and explicit rules: the kind of thinking you'd find in old detective stories, where reasoning, deduction, and facts solved the case. It tries to mirror how we think logically and solve problems using knowledge and structure.
Imagine teaching a computer like you'd teach a child using flashcards and if-then rules:
If the light is red, then stop. If it’s raining, then carry an umbrella.
Symbolic AI uses:
Logic-based rules: Think “if-then” statements.
Knowledge bases: Structured libraries of facts (like encyclopedias for machines).
Ontologies and inference engines: To connect the dots and deduce new information.
It’s like giving a computer a manual, and it follows the instructions perfectly.
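To make the flashcard analogy concrete, here’s a minimal sketch of that manual-following in action: a hypothetical forward-chaining loop that applies if-then rules to a small knowledge base of facts. Every fact and rule name here is invented for illustration.

```python
# A toy rule-based system: a knowledge base of facts plus if-then
# rules, applied repeatedly until nothing new can be deduced
# (forward chaining). All facts and rules are illustrative.

facts = {"light_is_red"}

# Each rule: (conditions that must all hold, fact to conclude)
rules = [
    ({"light_is_red"}, "stop"),
    ({"raining"}, "carry_umbrella"),
    ({"stop"}, "wait_for_green"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)  # deduce a new fact from the rules
            changed = True

print(facts)  # now includes 'stop' and 'wait_for_green', but not 'carry_umbrella'
```

Notice how the conclusion of one rule (“stop”) can trigger another; that connect-the-dots behavior is exactly what an inference engine provides.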
Why are we still talking about Symbolic AI?
Even though machine learning and deep learning have stolen the spotlight, Symbolic AI still matters, big time:
Regulatory systems and expert logic: Many industries, such as law, finance, and medicine, still need explainable AI, where decisions can be traced back to clear rules, not just mathematical weights in a black box.
Security and policy enforcement: In cybersecurity and compliance, symbolic reasoning is used to enforce policies, detect rule violations, and make logic-based decisions.
Hybrid AI: Today’s smartest systems often combine both worlds: machine learning for pattern recognition and symbolic AI for reasoning.
Core components of Symbolic AI
Before AI could think, it needed to know. Symbolic AI uses structures like:
Knowledge representation: How the machine "knows" things, encoding facts and relationships in a structured, machine-readable form.
Inference engines: These are the systems that let AI reason, deduce, and draw conclusions like a tiny digital Sherlock Holmes.
Planning systems: When you give a machine a goal, like solving a puzzle or making coffee, it needs a plan. Enter planning systems (sketched after this list) like:
STRIPS (Stanford Research Institute Problem Solver): A legendary planner that broke down big goals into smaller, achievable steps.
Goal trees: Like decision trees, but reversed, starting with a goal and asking, “What do I need to do to get there?”
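Here’s a tiny, hypothetical sketch of that goal-driven decomposition, in the spirit of STRIPS-style planning. The “make coffee” domain and every name in it are invented for illustration; a real planner like STRIPS also models preconditions and effects of each action.

```python
# A toy goal-tree planner: start from a goal and recursively ask
# what subgoals achieve it, bottoming out in primitive actions.

# goal -> subgoals that must be achieved first
methods = {
    "have_coffee": ["have_hot_water", "have_ground_beans"],
    "have_hot_water": ["kettle_boiled"],
    "have_ground_beans": ["beans_ground"],
}

# goals we can satisfy directly with a single action
primitive = {
    "kettle_boiled": "boil kettle",
    "beans_ground": "grind beans",
}

def plan(goal):
    """Return an ordered list of primitive actions that achieve goal."""
    if goal in primitive:
        return [primitive[goal]]
    steps = []
    for subgoal in methods[goal]:
        steps.extend(plan(subgoal))  # break the big goal into smaller ones
    return steps

print(plan("have_coffee"))  # ['boil kettle', 'grind beans']
```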
Strengths of traditional AI
Transparency and interpretability: With symbolic AI, you can see every decision, every rule, and every reason. It’s like reading a clear “thought process” on paper.
Deterministic and predictable: Symbolic AI behaves exactly the same way every time, given the same input. There are no surprises, no randomness, and no mystery.
Strong in rule-based domains: Traditional AI shines in domains with clear rules and structured knowledge. For example, in law, AI helps parse legal documents and spot inconsistencies using rule-based logic.
Easy to debug and verify: When something goes wrong in a Symbolic AI system, you don’t need a data scientist with an advanced degree in neural networks; you need someone who understands logic. Symbolic AI is like Lego. If something breaks, you can see where it broke and snap it back into place.
In high-stakes, rule-bound environments, it’s still the go-to brain, not because it learns, but because it knows what it’s doing. Sometimes, the best AI isn’t the one that guesses; it’s the one that explains.
Limitations of Traditional AI
Traditional (Symbolic) AI is fantastic at what it was built for: operating in well-defined, rule-based systems where logic and traceability matter. But when the world gets unpredictable, Symbolic AI falters. It struggles with ambiguity: vague terms like “soon” or “near” leave it confused unless every edge case is painstakingly defined, and it can’t learn from patterns or make sense of messy, unstructured data the way a modern machine learning model can. It’s also brittle; even small changes in input can break it, and every update requires domain experts to rewrite complex rule sets. But this rigidity, ironically, is a strength in domains like code security.
In this space, especially when tracing call paths or identifying vulnerabilities, consistency, explainability, and zero tolerance for hallucinations aren’t just nice-to-haves; they’re critical. That’s why Symbolic AI remains a powerful ally in tools like Snyk; it doesn’t guess, it reasons. And when the cost of a wrong answer is lost developer time or a missed vulnerability, reason wins every time.
Snyk’s view on Symbolic AI
Combining Symbolic AI, Machine Learning (ML), and Large Language Models (LLMs), all focused on one developer security workflow, enables results unattainable with a single model.

In this approach, Symbolic AI allows us to test for issues based on an abstracted understanding of the data flow. This makes for more accurate results because you are testing for flows, sources, sanitizers, and sinks rather than string matches. We use ML to generate new rules that we can apply to the searches, increasing accuracy and coverage over time. When a vulnerability is found, we use an LLM to generate a fix, but then put a ‘version’ of the fixed code through a test before even showing it to the user, to ensure the fix works and doesn’t create new problems.
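To illustrate the flows-not-strings idea, here’s a minimal, hypothetical taint-style check over sources, sanitizers, and sinks. The call names are invented placeholders, and this is nowhere near a real engine, which reasons over parsed code rather than hand-written call paths.

```python
# Toy taint tracking: does data from an untrusted source reach a
# dangerous sink without passing through a sanitizer? All names
# are illustrative placeholders, not a real API surface.

SOURCES = {"request.args.get"}   # untrusted input enters here
SANITIZERS = {"escape_html"}     # neutralizes the taint
SINKS = {"response.write"}       # dangerous if tainted data arrives

def is_vulnerable(flow):
    """flow: an ordered list of calls along one data-flow path."""
    tainted = False
    for call in flow:
        if call in SOURCES:
            tainted = True
        elif call in SANITIZERS:
            tainted = False
        elif call in SINKS and tainted:
            return True  # tainted data reached a sink: flag it
    return False

print(is_vulnerable(["request.args.get", "response.write"]))                 # True
print(is_vulnerable(["request.args.get", "escape_html", "response.write"]))  # False
```

A string match would flag both paths (or neither); reasoning over the flow lets the second, sanitized path pass cleanly.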
If you only did one of those things, say just the LLM fix or just the ML, you could not provide the real-time solution of a fast yet complete scan in the IDE, detected issues, and a suggested fix all in one flow. That would require either the developer to repeat steps, killing the productivity they were hoping to gain from AI, or possibly applying a ‘fix’ that was in fact itself insecure.
The integrated approach gives the best results by using the right tool at the right time. We track every state of a variable and what is happening to it, like a write after a read or a mutex locked twice. Those rules are then evaluated and checked.
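As a sketch of what tracking variable state can look like, here’s a toy stateful rule that flags a mutex locked twice without an intervening unlock. The event trace is hand-written for illustration; a real analyzer would derive it from the program’s control flow.

```python
# Toy stateful rule: flag any mutex that is locked twice without an
# unlock in between. The trace format is invented for illustration.

def find_double_locks(events):
    """events: ordered (action, lock_name) pairs, e.g. ("lock", "m1")."""
    held = set()
    violations = []
    for action, lock in events:
        if action == "lock":
            if lock in held:
                violations.append(lock)  # locked twice: rule violation
            held.add(lock)
        elif action == "unlock":
            held.discard(lock)
    return violations

trace = [("lock", "m1"), ("lock", "m2"), ("lock", "m1"), ("unlock", "m2")]
print(find_double_locks(trace))  # ['m1']
```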
Symbolic AI was like teaching a machine using textbooks, flowcharts, and rulebooks, giving it structure, order, and logic. While it couldn’t learn from data like modern AI, it could explain its thinking, and that’s something today’s black-box models still struggle with.
In the end, "If deep learning is the instinct, symbolic AI is the wisdom."
Start securing AI-generated code
Create your free Snyk account to start securing AI-generated code in minutes. Or book an expert demo to see how Snyk can fit your developer security use cases.