The CIA Triad exposed by a security adventurer

A fundamental concept in information security is the CIA Triad. Don’t worry, it has nothing to do with secret agents teaming up with the mafia. In our context, CIA stands for Confidentiality, Integrity, Availability. Those three ideas form the primary goals when thinking about securing an infrastructure.
When a company spends money to secure data or assets, the purpose is to achieve some level of CIA. Hopefully, the people in charge know that and are not spending money without a target.

The bottom line is that the CIA Triad is a model used to guide how levels of protection are defined and evaluated. As a security professional, you will generally evaluate security controls based on how well they tackle one or more of those concepts. If any of the principles is met at an unsatisfactory level, the information system is at risk, security people are not happy, and end users can be impacted in real life.

Let’s break all that down in a clear, simple, step-by-step way. In the following paragraphs, I will use the word resource when referring to the thing we are trying to protect. Most of the time, you can replace it in your mind with data, information, or service. In a few cases, you can’t; I will try to use a more specific word for those cases. Let me know if you notice me missing something.


1. Confidentiality – Keeping things visible only to the right people

The purpose of Confidentiality is to make sure that only an authorized entity (a person, a machine, a computer program) can access a resource. Note that when properly implemented, authorization is not granted once and for all. An entity can have the authorization to access a resource depending on many criteria such as location, credentials used, time of day, etc. Conditional Access in Microsoft Entra ID is a good illustration.
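To make this concrete, here is a minimal sketch of such a conditional access decision in Python. The function and its parameters are hypothetical, not an actual Entra ID API; real products evaluate far richer signals.

```python
# Hypothetical conditional access check: authorization depends not only
# on who is asking, but also on the factor used, the location, and the time.
def is_access_allowed(user_authenticated: bool, used_mfa: bool,
                      country: str, hour: int) -> bool:
    if not user_authenticated:
        return False      # identity not proven at all
    if not used_mfa:
        return False      # a single factor is not enough for this resource
    if country not in {"FR", "BE", "CH"}:
        return False      # request comes from outside the allowed locations
    if not 7 <= hour <= 20:
        return False      # outside the allowed time window
    return True

print(is_access_allowed(True, True, "FR", hour=10))   # True
print(is_access_allowed(True, False, "FR", hour=10))  # False: no MFA
```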

If your protection measures fail to address confidentiality, it means that an unauthorized entity can get access to your resource. Even if nothing happens while confidentiality is insufficient, it is still bad. Why? Because you might not even be able to know for sure whether an entity got access. It also has repercussions such as non-compliance with regulations or losing the trust of customers. Don’t wait for the battle to learn how to fight.
Now, keep in mind that the failure leading to exposure of data can be accidental or malicious. For example, someone makes a configuration mistake, and bad news: company confidential data is available on the internet. It could also be the result of an attacker stealing your data, or someone reading your super admin password over your shoulder while you log in from the bus to work.

Common causes of loss of confidentiality

  • Human error such as sending data to the wrong person or exposing confidential information in a public space.
  • Weak or reused passwords – and single-factor authentication – allowing malicious actors to get access after guessing the credentials.
  • Malware such as keyloggers that can spy on your activities and send your secrets back to the attackers.
  • Unencrypted data at rest or in transit, making data readable for anyone with direct access to it (such as having the physical drive).
  • Misconfigured systems that make data unintentionally available to unauthorized entities.

Examples of countermeasures to help maintain confidentiality

  • People training and awareness – Minimize human mistakes by raising awareness among employees.
  • Classification of data – Define the available levels of secrecy and label data accordingly. Which data should be Public? Confidential? Secret? This choice will also influence the security effort (features, mechanisms, processes) deployed to maintain the desired level of confidentiality. Business priorities, legal requirements, and people’s safety should drive it.
  • Encryption – Make the data unreadable for all entities that don’t know the secret key. The encryption scheme should resist potential attacks for long enough, i.e. for the period during which the data is valuable (see the sketch after this list).
  • Access control – Appropriate mechanisms to grant access to authorized entities under the right conditions.
  • Endpoint protection – Security tools and practices used to protect end-user devices (Endpoint Detection & Response, antivirus, etc.).
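To illustrate the encryption bullet above, here is a minimal sketch using the Fernet recipe from the Python cryptography package (assuming `pip install cryptography`); it keeps data at rest unreadable for anyone without the key.

```python
from cryptography.fernet import Fernet

# Generate a secret key; in practice, store it in a key vault,
# never next to the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

# Stored or transmitted data becomes an unreadable token...
token = f.encrypt(b"customer list: Alice, Bob")
print(token)             # ciphertext: useless without the key

# ...and only entities holding the key can read it back.
print(f.decrypt(token))  # b'customer list: Alice, Bob'
```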

2. Integrity – Making Sure Information Is Correct

Integrity ensures that data is reliable, correct and hasn’t been altered in unauthorized ways. This means that authorized changes are allowed, while unauthorized changes (malicious or accidental) are not permitted. The core objective of a properly implemented integrity control can be summarized in three major principles.

First, unauthorized subjects must not be able to make modifications.

Second, authorized entities must not be able to make unauthorized modifications.

And finally, data must be valid, consistent, and verifiable at any given point in time during its lifecycle.
– Valid means that the data is factual and complies with a number of constraints. For example, imagine you are consulting data about people’s ages. At some point, you see “Patient X, age: Coconut”. This data is not logically sound, thus it is not valid (a sketch follows this list).
– Consistent means that all copies of the data are the same across all systems where users can legitimately access it. If a mobile application, a website, and a bank ATM all give access to information about a bank account, users expect the data to be consistent.
– Verifiable is about making sure that the system allows tracing back the origin of operations on the data (creation, modification, deletion).
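Here is the sketch announced above: a minimal validity check matching the “age: Coconut” example. The constraint values are hypothetical; real systems usually enforce this kind of rule at write time, in the schema or the application layer.

```python
def is_valid_age(value) -> bool:
    # Valid means factual and compliant with constraints: an age must be
    # an integer within a plausible human range.
    return isinstance(value, int) and 0 <= value <= 130

print(is_valid_age(42))         # True
print(is_valid_age("Coconut"))  # False: not logically sound, reject on write
```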

Additional concepts you need to know

There are other important terms that you must keep in mind when working on data integrity: Accuracy, Truthfulness, Completeness, and Comprehensiveness.

Accuracy – Being correct and precise. A simple example is an e-commerce system showing that a customer ordered 2 items, but due to a bug, the order is recorded as 20 items in the database. That is a big issue!

Truthfulness – Requires the data to be a true reflection of reality. It requires a form of honesty. Think about a person who fills out a form to get access to an event for people over 30 years old. That person declares being 31 years old, and the system stores it exactly as entered. The data is accurate, but not truthful if the person is actually 27 years old. In fact, it is not truthful if the person is anything but 31 years old. This principle often relates to fraud or abuse cases.

Completeness – The information has all the necessary components or parts. Let me take the example of a shipping address in an online store. Assume that all related data is accurate and truthful, but somehow the city is not registered with the order in the database. For sure, this situation will create issues for the delivery of the package, and it will cause significant damage to business operations if it happens regularly. Just imagine that kind of issue on thousands of orders.

Comprehensiveness – The information is complete in scope; it includes all needed elements. This one is more challenging because the needed elements are not necessarily directly visible. Let’s stick to the example of a user who orders a product. For regulatory reasons, technical reasons, or to feed the incident response process, it might be necessary to log additional information such as the timestamp, the type of device used, the IP addresses in play, or a description of the payment method. All of that could be important to get a complete view of the event. You know you missed something when you find yourself in a situation where you could say “I wish I knew that detail about this situation/event”.
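As a sketch of what a comprehensive record could look like, here is a hypothetical order event enriched with the surrounding context mentioned above; the field names are illustrative, and the right scope depends on your regulatory and incident-response needs.

```python
import json
from datetime import datetime, timezone

# A comprehensive order event: not just the order itself, but also the
# context you may wish you had during an investigation.
event = {
    "order_id": "ORD-1042",
    "items": [{"sku": "SKU-7", "quantity": 2}],
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "device_type": "mobile",
    "client_ip": "203.0.113.42",        # documentation address (RFC 5737)
    "payment_method": "card ending 4242",
}
print(json.dumps(event, indent=2))
```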

Common causes of loss of integrity

  • Human error such as accidental deletion or modification of files.
  • Software bugs or glitches causing errors or corruption.
  • Poor version control or lack of change tracking, making it difficult or impossible to control the quality of the data.
  • Failed or improper backups and restorations, removing the ability to recover data if there is a problem.
  • SQL injection, malware, or data-manipulating attacks that alter or corrupt data.

A few countermeasures to help maintain integrity

  • People training and awareness – Minimizing human mistakes by raising awareness among employees.
  • Access control – Having appropriate mechanisms to grant access to authorized entities and to allow only permitted modifications.
  • Checksums, error-checking and signing mechanisms – Used to detect or correct corruption of data. When the problem is discovered, the system can request retransmission, discard the data, or correct the error (see the sketch after this list).
  • Version control systems (like Git) – Track data changes and allow traceability of modifications.
  • Backup testing and automation – Ensure the capability to recover in case of emergency.
  • Encryption – Besides making the data unreadable, encryption adds a constraint on data modification, because it requires access to the secret key. To make a modification that seems legitimate, you must decrypt the data, modify it, and re-encrypt it with the legitimate key (i.e., the key expected by the system and its users).
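Here is the sketch announced in the checksums bullet: detecting corruption with a SHA-256 digest from Python’s standard hashlib module. Note that a plain hash only catches accidental corruption; against an active attacker, you would sign the data or use an HMAC with a secret key.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"amount=100.00;recipient=Alice"
expected = sha256_hex(original)

# Later, after storage or transmission, recompute and compare.
received = b"amount=900.00;recipient=Alice"  # corrupted or tampered copy
if sha256_hex(received) != expected:
    print("Integrity check failed: discard the data or request retransmission")
```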

3. Availability – Being timely and uninterrupted

Availability ensures that authorized users can access the data or systems when they need to. I will talk about service because, most of the time, there are many resources involved in the chain of availability. A question to visualize it would be: “How long will the service be unavailable to users over a given period?” You will see the terms uptime and downtime in this context. Uptime is the amount of time a resource is operational and accessible to users as intended; otherwise, the resource is in a downtime period. Note that you will encounter situations where resources are up (short for “the uptime clock is ticking”) but the service intended for entities is down. This could be the result of problems such as connectivity issues. Availability is always tied to something specific; being precise is important when defining the objective.

Availability is usually expressed as a percentage that indicates how long a service is expected to be available over a period of time. As an example: 99.9% uptime means no more than 1 minute and 26 seconds of downtime in a day. Don’t even try to aim for 100% availability. That would be insanely expensive, if not impossible.
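The arithmetic behind such figures is simple; here is a small sketch to compute the downtime budget allowed by an availability target:

```python
# Downtime budget allowed by an availability target over a given period.
def downtime_budget_seconds(availability_pct: float, period_s: int) -> float:
    return period_s * (1 - availability_pct / 100)

DAY = 24 * 60 * 60
print(downtime_budget_seconds(99.9, DAY))        # ≈ 86.4 s/day, ~1 min 26 s
print(downtime_budget_seconds(99.9, 365 * DAY))  # ≈ 31536 s/year, ~8 h 46 min
```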

Some events such as human mistakes, attacks, or natural disasters can make your service unavailable for a certain period of time. This will definitely have bad consequences for the business. The importance of the service will dictate how bad the consequences are. Generally, a business representative who needs the service should define a proper target for availability. The objective will imply the implementation of the necessary tools, processes, and architecture to make it happen. Don’t forget that it comes with a cost: do you really need that nice geo-redundant architecture for your use case?

In this section, I used the word service on purpose. This is because availability is rarely about a single device like a machine, a database, or an application running on a server. Your server might be available, but if enough components on the path are not, you don’t get any acceptable uptime. Availability is about everything that makes the service work in the desired manner. To get the overall availability of multiple components in series, you multiply the individual availabilities.

Total Availability = Availability(Component 1) × Availability(Component 2) × Availability(Component 3) × …
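A quick sketch of this multiplication shows how fast the chain degrades, even when each component looks good on paper:

```python
import math

# Overall availability of components in series (all must be up).
def chain_availability(availabilities: list[float]) -> float:
    return math.prod(availabilities)

# Three components at 99.9% each...
total = chain_availability([0.999, 0.999, 0.999])
print(f"{total:.4%}")  # 99.7003%: already below each individual target
```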

Additional concepts you should know

The availability concept comes with conditions and aspects such as:

  • Usability – The system or data must not only be available but also function properly so that users can interact with it effectively. If it is up but broken, it is not truly usable.
  • Accessibility – Authorized users must be able to reach and connect to the system without barriers, such as login issues, network failures, or permission errors.
  • Timeliness – Access must occur within an acceptable time frame, meaning the system responds quickly enough to meet business or operational needs. Delays can be just as harmful as outages.

Common causes of loss of availability

  • Hardware failures due to broken servers or drives, power outages or disruptions, natural disasters.
  • Denial-of-Service (DoS) or Distributed Denial-of-Service (DDoS) attacks that aim to exhaust available resources and disturb normal operations.
  • Ransomware attacks locking systems or files, making normal operations difficult to resume.
  • Network congestion, impacting the capacity to deliver the service to users.
  • Overloaded systems (compute power, memory) due to unexpected traffic spikes.
  • Poor capacity planning or lack of redundancy that prevents the uptime requirement from being met in all conditions.
  • Maintenance mistakes or failed updates, requiring an extended intervention on the system.

A few countermeasures to help maintain availability

  • People training and awareness – Minimizing human mistakes by raising awareness among employees. I know, this measure keeps coming back, just like a mosquito. I want you to realize that people need proper, permanent training.
  • Version control systems – Track data changes over time, and roll back whenever necessary.
  • Checksums and hashes – Verify file integrity to ensure no modification occurred during transfer.
  • System redundancy – Having fallback systems or equipment, such as RAID disks and failover servers/datacenters.
  • Power Redundancy – Install UPS (Uninterruptible Power Supply) devices and backup generators.
  • Anti-DDoS protection and rate-limiting defenses – To limit the impact of attackers who try to flood your systems with illegitimate requests (see the sketch after this list).
  • Geographic redundancy – Host systems in multiple geographic locations to minimize impact of natural disasters.
  • Scalability by design – Use appropriate features such as auto-scaling, load-tested architecture, and forecast-driven capacity planning to build environments that can grow with traffic and withstand spikes.
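To make the rate-limiting idea concrete, here is a minimal token bucket sketch (as announced in the anti-DDoS bullet). This is a toy illustration; real deployments enforce rate limits at the edge (reverse proxy, CDN, dedicated anti-DDoS service), not in application code.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over budget: reject, delay, or queue the request

bucket = TokenBucket(rate=5, capacity=10)
print(sum(bucket.allow() for _ in range(50)))  # ~10 allowed during a burst
```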

4. Why the CIA Triad Matters

The CIA Triad is used by security professionals to design, test, and improve the security of systems. It’s a fundamental concept. Whenever you’re making a design decision in cybersecurity, you are somehow trying to answer the question: what level of CIA is needed here?
An important thing to keep in mind is that these three goals form a global balance. When you put effort into one, you can impact another.

Three sharp examples of interactions within the CIA Triad

Confidentiality requires controls that can impact availability, making legitimate users unable to access the service. You know, like that employee who forgot their MFA device at home.

Integrity may need mechanisms such as version control or audit trails. This creates an additional burden on the effort required to maintain confidentiality, because information about confidential data is most likely at least as confidential, and sometimes more.

Availability requires redundancy. Now, here you are with an architecture that duplicates data across regions, with backups, and you must maintain integrity across all those copies.


5. Summary Table

Principle | What It Means in Plain English | Why It Matters in Cybersecurity
Confidentiality | Only the right people can access the data. | Protects against leaks, spying, and unauthorized access; maintains stakeholders’ trust in your systems.
Integrity | The data is not altered in an undesired way. | Prevents tampering and data loss; avoids errors and abuses.
Availability | The data is accessible whenever needed. | Ensures systems keep working for legitimate users; the system and its data are useless if no one can use them when needed.

What’s Next?

Now you know the main things about the CIA Triad 📚. If you feel you need additional information about that topic, hit my mailbox.

In the next post, I will explore due diligence and due care. I had some trouble figuring them out even though the descriptions seem simple. I kept blending the two concepts for a while before figuring out how to decide which actions go into which concept.

Stay tuned and stay secure!