From abstract to concrete - the last mile of PoSecCo

11 Nov 2013 | Written by Serena Ponta - Contact Serena

The PoSecCo project proposes new methods and tools for configuring a service landscape in a way that meets security requirements. At design time, the Security Decision Support System (SDSS) can be used to refine high-level security requirements into low-level configuration settings that enforce them. The configuration settings generated by the SDSS are stored in the MoVE central repository. Such configurations are "abstract" in the sense that they use a vendor- and product-independent syntax and format. They are defined as instances of the Configuration meta-model, in the form of sets of rules that depend on the control features and directly use the functionalities available at the target control. The last step in the enactment of such configurations is their deployment on the actual system.

Deploying and maintaining configurations, that is, setting the expected values on the actual systems, is an activity expected from configuration management systems. As the configurations generated by the PoSecCo refinement process are abstract, the deployment process first needs to translate the vendor- and product-independent configuration settings into concrete ones that follow the syntax and semantics of the products in use in the landscape. This is an additional task to be included in configuration management activities: depending on the configuration management system used, the service provider has to adjust its state-of-the-art processes to translate and deploy PoSecCo-generated configurations. In the PoSecCo proof of concept we selected the Puppet configuration management system and provided modules that translate abstract authorization configurations into concrete ones for J2EE Web applications, MySQL databases, and Ubuntu operating systems.

Puppet is a well-known configuration management tool. It deploys configurations defined in artifacts, called manifests, written in an easy-to-read, declarative language. Puppet comes with a client/server infrastructure: a master hosts all the manifests, and agents run on the systems to be configured. The tool can be used in two modes: pull or push. In pull mode, each agent periodically contacts the master to ask for updates to the deployed configurations; in push mode, the master sends selected configurations to the agents. The PoSecCo deployment component uses the push mode.

Every time a new configuration is generated and stored in MoVE, the deployment component determines the type of IT resource to which the configuration applies, e.g., a J2EE Web application or MySQL, and starts the appropriate module for generating the concrete configuration. The module generates the concrete configuration in the form of a Puppet manifest file and stores it on the Puppet master. Additionally, a check for the newly generated configuration is automatically built (in the form of an OVAL definition) and stored, enabling configuration validation at operational time (see the 29 April post). Afterwards, Puppet is triggered to push the concrete configuration to the specific host, e.g., to modify the deployment descriptor of a J2EE Web application or to grant or remove access permissions on a database table.
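The translation step performed by such a module can be sketched as follows. The function name and the abstract rule's fields are hypothetical illustrations, not the actual PoSecCo module API; only `mysql_grant` is a real resource type (from the puppetlabs-mysql Puppet module).

```python
# Hypothetical sketch of an abstract-to-concrete translation step: an
# abstract, vendor-independent authorization rule is rendered as a Puppet
# manifest that grants the corresponding MySQL privilege.

def to_mysql_manifest(rule: dict) -> str:
    """Render an abstract authorization rule as a Puppet manifest using
    the mysql_grant resource type of the puppetlabs-mysql module."""
    target = f"{rule['principal']}@{rule['host']}/{rule['resource']}"
    privileges = ", ".join(f"'{p}'" for p in rule["privileges"])
    return (
        f"mysql_grant {{ '{target}':\n"
        f"  ensure     => present,\n"
        f"  user       => '{rule['principal']}@{rule['host']}',\n"
        f"  table      => '{rule['resource']}',\n"
        f"  privileges => [{privileges}],\n"
        f"}}\n"
    )

# Abstract rule as it might be stored in MoVE (illustrative values).
abstract_rule = {
    "principal": "customer1",
    "host": "localhost",
    "resource": "einvoicing.invoices",
    "privileges": ["SELECT"],
}
print(to_mysql_manifest(abstract_rule))
```

The resulting manifest file would be stored on the Puppet master and pushed to the agent hosting the MySQL database.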

For more information, we refer to the following sources:



Come and visit PoSecCo @ ICT2013 in Vilnius!

04 Nov 2013 | Written by Marc Soignet - Contact Marc

Has this ever happened to you? You are sitting comfortably at home watching your favourite sports event; your favourite athlete has led the entire race, but now that he is approaching the finish line, his opponents are getting closer and closer... At this precise moment, the image gets pixelated and partially freezes! Everybody knows this annoying feeling, and PoSecCo is here to make sure that it no longer happens, at least not due to IT security misconfigurations!

Watch the video:


Come and get introduced to the functionalities brought about by PoSecCo at the ICT 2013 exhibition in Vilnius, from 6 to 8 November 2013, in the Hall dedicated to Industry and Business for Tomorrow!

Tired of cost-intensive and error-prone IT security processes? Too many security-related business and legal requirements imposed on your IT system? PoSecCo translates all these into IT security configurations in an as-automatic-as-possible way and offers decision-support wherever human interaction is inevitable!

Our vision is to establish and maintain a consistent, transparent, sustainable and traceable link between high-level, business-driven requirements on one side and low-level technical configuration settings on the other. This increases the security of services and, at the same time, decreases operating costs, enabling service providers to focus more on the functional aspects of their services, and service consumers and auditors to gain assurance that given security objectives are met.

Come and meet our project staff who will exemplify the use of the tool through our end-user’s Paralympics video streaming scenario, which is intuitively comprehensible even for a non-technical audience. You will learn how to use the different functionalities through a top-down approach following increasing complexity, from high-level requirements down to configurations.

The most experienced visitors will have the opportunity to try out our tool themselves!

Contact us!


Last Project Milestones

21 Oct 2013 | Written by Marc Soignet - Contact Marc

PoSecCo is entering the final stage of the project. The integrated prototype was delivered last month, and the project evaluation workshop was successfully carried out in the first week of October. The partner responsible for orchestrating and analysing the evaluation data, the University of Innsbruck, is very confident about the quality of the data acquired. The quantitative KPI assessment will be complemented by a qualitative analysis based on group interviews organised with the end-user staff members right after the evaluation. All results will be made available in a dedicated working document, in the final deliverable, as well as in the PoSecCo book to be published in early 2014. For more information, do not hesitate to contact us.

The last project tasks under work are:

  • the final PoSecCo event, which will take place in the form of a demonstration booth at the ICT 2013 conference in Vilnius, from 6 to 8 November 2013;
  • the setting up of the PoSecCo online testing environment, which will soon be available (more information on that in the coming weeks);
  • the writing of the PoSecCo book, which will comprehensively present all the achievements and new concepts introduced during these three years, including practical examples that can be studied using the online test environment.

Finally, the entire consortium is looking forward to presenting the final outcomes of the project to the European Commission and the three appointed independent experts in mid-December in Madrid, on the occasion of the final review of PoSecCo.


Stay tuned, more information will be coming throughout the next weeks.



Security policies in PoSecCo - differences and commonalities with regard to other representations

07 Oct 2013 | Written by Simone Mutti - Contact Simone

The management of security requirements is a critical task whose goals are to avoid possible information disclosure and to show compliance with the many regulations promulgated by governments. A particular area of security requirement management is access control management, which focuses on defining a set of rules, called policies, that specify which actions each user may perform on the resources of the information system. In recent years researchers have developed a variety of policy languages for access control, in most cases directly integrated with the languages and models of the modern Web scenario. These include industry standards such as XACML, but also academic efforts ranging from practical, implemented languages such as Ponder to Semantic Web based languages such as KAoS.

An abstract policy language without ties to a concrete model gives the designer considerable freedom, but it may also offer limited guidance and insufficient precision in the description. Conversely, an excessively concrete model may not have the flexibility needed to correctly express the policy of a given system, or may force the omission of important aspects. With this in mind we have created the IT Security meta-model (D2.5). It has six major components, each representing a number of concepts: Principal, Security rule, Privilege, Authentication property, Resource, and Security domain. The Principal, Security rule, and Resource components are associated with relatively extensive schemas, whereas the remaining components contain a limited number of entities. In order to highlight the potential of the IT Security meta-model, we have performed an analysis of the de facto standards used in modern access control. Different criteria have been considered:

  1. Policy category: the typology of policies that can be specified (e.g., access control, general purpose);
  2. Ontology availability: the existence of associated ontologies; ontologies can bring several advantages for many tasks (e.g., conflict detection, knowledge disclosure);
  3. Formal model availability: the availability of a formal definition of the language, because formal languages are more suitable for the complex verification tasks foreseen by PoSecCo;
  4. Conflict detection and harmonization models availability: since the project aims at generating conflict-free policies, the availability of conflict detection and harmonization models has been considered;
  5. Refinement models availability: one of the main goals of PoSecCo is to refine business-level policies into actual configurations; therefore, the existence of refinement models, or of previous activities that proved the possibility of translating a policy between different layers of abstraction, has been considered an important added value;
  6. Related tools availability: we tried to identify tools developed by third parties;
  7. Usability: we tried to assess the usability of the model and tools (e.g., verbosity, complexity).
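To give a feel for the meta-model's core concepts, the sketch below renders three of the six components (Principal, Resource, Security rule) as minimal Python classes. This is an illustration only: the class and attribute names are our own simplification, and the actual meta-model in D2.5 is far richer.

```python
# Illustrative only: a minimal rendering of three IT Security meta-model
# components as Python dataclasses. Names are simplified, not from D2.5.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Principal:
    name: str

@dataclass(frozen=True)
class Resource:
    name: str

@dataclass
class SecurityRule:
    principal: Principal
    resource: Resource
    privileges: frozenset = field(default_factory=frozenset)

    def permits(self, action: str) -> bool:
        """A rule grants exactly the privileges it lists."""
        return action in self.privileges

# "Customer#1 is allowed to reach the eInvoicing service", roughly:
rule = SecurityRule(Principal("customer1"), Resource("eInvoicing"),
                    frozenset({"access"}))
print(rule.permits("access"))   # True
print(rule.permits("delete"))   # False
```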


The outcome of the comparison between the IT Security meta-model and the other languages considered (XACML, SecPAL, UCON/OSL, EPAL, Ponder, PCIM, and KAoS) is summarized below, criterion by criterion.

Policy category
  • IT Security meta-model: authorization, authentication
  • XACML: authorization, obligation
  • Ponder: authorization, obligation
  • KAoS: authorization, obligation

Ontology availability
  • IT Security meta-model: available and completely aligned with the formal model
  • XACML: XACML attributes defined as RDF properties
  • SecPAL: no explicitly defined ontology available
  • UCON/OSL: no explicitly defined ontology available
  • EPAL: no explicitly defined ontology available
  • Ponder: no explicitly defined ontology available
  • PCIM: tools to translate CIM to OWL are available
  • KAoS: KPO (KAoS Policy Ontology), which covers the entire KAoS policy language

Formal model availability
  • IT Security meta-model: a formal model is available and described in D2.5
  • XACML: no official formal semantics of the language; however, XACML policies can be used directly by mathematical tools
  • SecPAL: logic-based language, with formal semantics based on Datalog
  • UCON/OSL: logic-based languages, using TLA (Temporal Logic of Actions) and the Z language
  • EPAL: no formal model; however, many concepts can be easily mapped to access control models
  • Ponder: PonderTalk, a Smalltalk-derived language
  • PCIM: represented using an object-oriented paradigm
  • KAoS: based on the KPO, which can be extended to include new concepts

Conflict detection and harmonization models availability
  • IT Security meta-model: conflict detection available for inconsistency, redundancy and Separation of Duty
  • XACML: conflicts cannot arise in the evaluation of XACML policies; however, analysis tools show inappropriate conflict resolution
  • SecPAL: conflict detection via failed queries in Datalog
  • UCON/OSL: no conflict detection study available for UCON; OSL proposes conflict detection based on model checking
  • EPAL: no support for conflict and anomaly detection
  • Ponder: domain nesting conflict resolution
  • PCIM: no conflict detection
  • KAoS: conflict detection at specification time

Refinement models availability
  • IT Security meta-model: translation to Abstract Configurations for operating systems, DBMSs and Web containers (e.g., Tomcat)
  • XACML: no refinement methodology
  • SecPAL: translation possible into Datalog with constraints
  • UCON/OSL: OSL defined a refinement model to formal abstract enforcement mechanisms
  • EPAL: no refinement methodology
  • Ponder: no refinement methodology
  • PCIM: no refinement methodology
  • KAoS: no refinement methodology, but it can be directly used to configure KAoS-enabled agent services

Related tools availability
  • IT Security meta-model: IT Policy Tool
  • XACML: many available tools to read, manipulate and validate XACML policies
  • SecPAL: see tools for Datalog with constraints
  • UCON/OSL: no available tools
  • EPAL: a compiler, an editor and a management toolkit are available
  • Ponder: Ponder2 provides an API, a PonderTalk compiler and a shell
  • PCIM: many tools to read, manipulate and validate the PCIM
  • KAoS: KPAT (KAoS Policy Administration Tool)

Usability
  • IT Security meta-model: defining the policies requires good knowledge of the model; the IT Policy Tool can be used to simplify policy editing
  • XACML: very verbose format
  • SecPAL: very close to natural language
  • UCON/OSL: defining the policies requires good knowledge of the underpinning formal logic
  • EPAL: human readable, despite being verbose
  • Ponder: human readable
  • PCIM: very verbose format
  • KAoS: KPAT can be used to simplify policy editing



Opportunities and limitations regarding the use of automated audit solutions: An auditor’s perspective (2/2)

30 Sep 2013 | Written by Delphine Zberro - Contact Delphine

This post is the second part of last week's post.

Reasonable assurance for data governance and IT general controls: A pre-requisite.

However, the implementation of an automated audit solution requires, from the entity's standpoint, an appropriate level of readiness of the IT and internal audit functions, so that the data extracted and the reports generated by the solution can be collected and analyzed adequately.

These systems rely upon the information provided by the audited entity. If the extracted system data contains errors, the decision about the compliance of a control could be erroneous.

If an audit relies completely or partly on an automated audit solution, that solution becomes a crucial part of the audit process, as it sits at the core of the audit activities. As a result, it is essential to obtain reasonable assurance that decisions based on such tools are not misled: some compliance failures may go undetected, or compliant items may be reported as non-compliant.

This issue raises the question of the key ingredient of automated auditing: the information provided by the entity (IPE). In order to perform an automated audit efficiently, it is mandatory to have reasonable assurance of the accuracy and completeness of the information provided by the audited entity. At a minimum, it is required to obtain a complete audit trail of the system-generated information, to ensure the implementation of IT general controls, and also the implementation of a data governance framework. Furthermore, it is necessary to ensure that the automated audit activities themselves comply with the applicable compliance rules and regulations, since they must cover the same requirements and the generated reports can be used as audit evidence.

As a result, the access security of automated audit solutions is one of the main key controls to look at when performing an audit with such tools. As with any key system in an organization, powerful user access profiles should be restricted to the appropriate users, since these tools extract system configurations and sensitive transaction data. Furthermore, it is important to provide secure and appropriate access to these tools for internal and external auditors.

It is also recommended to investigate change management for these tools, as some changes could revert the tool configuration to its initial state or alter the rules used to analyze the data, resulting in erroneous reports. Using the reports for external audit purposes should also rest on reasonable assurance of the completeness and accuracy of the information provided, obtained by implementing and documenting an audit trail.

Finally, and not only from an auditor's point of view, successful exploitation by an audited entity resides in the implementation of a data governance framework (i.e. a "single point of truth") with adequate sponsorship to target a "reasonable" return on investment…


Opportunities and limitations regarding the use of automated audit solutions: An auditor’s perspective (1/2)

23 Sep 2013 | Written by Delphine Zberro - Contact Delphine

The need for automated audit solutions has emerged from the growing number of compliance rules and regulations requiring the implementation of controls over information systems, and from the requirement to provide continuous controls over those systems. Yet, while these automated solutions provide great opportunities, they also require the implementation of adequate safeguards to ensure their sound exploitation.

An opportunity for continuous policy monitoring and a reduction of manual tasks

Automated audit activities can be defined as a method used by auditors to perform audit-related activities with automated tools.

The complexity of regulatory environments and the due diligence governing audit activities create several constraints for audited entities as well as for external auditors, and thereby generate costly and time-consuming tasks usually performed manually. Automated methods aim at automatically detecting exceptions or anomalies, and therefore non-compliance with controls and regulations, by pre-identifying the processes and transactions to be audited, defining a set of control rules, determining the types of tests to be performed, and selecting the data to be analyzed.
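The method just described can be sketched in a few lines: control rules are applied automatically to extracted system data, and every violation is reported as an exception for the auditor. All names and data below are illustrative, not taken from any real audit tool.

```python
# Toy illustration of automated audit checking: apply control rules to
# extracted records and collect the exceptions (rule violations).

def check_controls(records, rules):
    """Return the exceptions as (record id, violated rule name) pairs."""
    exceptions = []
    for record in records:
        for name, predicate in rules.items():
            if not predicate(record):
                exceptions.append((record["id"], name))
    return exceptions

# Extracted data: user accounts from the audited system (made-up values).
accounts = [
    {"id": "u1", "role": "admin", "last_review_days": 30},
    {"id": "u2", "role": "admin", "last_review_days": 400},
]

# Control rule: admin access rights must be reviewed at least yearly.
rules = {
    "admin-access-reviewed": lambda r: r["role"] != "admin"
                                       or r["last_review_days"] <= 365,
}

print(check_controls(accounts, rules))   # [('u2', 'admin-access-reviewed')]
```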

For instance, an ISAE 3402 audit (the global assurance standard for reporting on controls performed by an external service provider) requires external auditors to certify the existence of the entity's services and controls. It therefore operationally requires a rigorous evidence-collection process.

The main benefits of automated auditing primarily reside in continuous control monitoring and in the cost optimization of audit activities. It allows auditors to perform more cyclical or episodic reviews and controls, and to analyze trends in these controls. For example, auditors can achieve timelier and less costly compliance with policies, procedures and regulations by reducing the time-consuming tasks usually performed manually, and eventually by shortening the audit testing period.

As such, automated audit solutions also constitute a great resource for key officials of an entity, such as the Compliance Manager, the Risk Manager, or the Chief Information Security Officer, to ensure continuous monitoring of policies, procedures and processes. These solutions can proactively assist Control functions of an entity in performing quality control, internal control, and internal audit activities.

Please check back next week for the second part of this post!


The PoSecCo architecture at a glance

16 Sep 2013 | Written by Beatriz Gallego-Nicasio Crespo - Contact Beatriz

The PoSecCo architecture is based on the central idea of creating and maintaining a policy chain that links high-level, abstract and declarative Business Policies on one side and low-level, imperative, and technical Security Configuration settings on the other side.


High level overview of the PoSecCo architecture

The figure above depicts a high level view of the PoSecCo architecture (in light gray), interacting with the service provider environment. Some elements of the target system (e.g. logs, UMS), the content of the CMDB, as well as the output of some business modeling tools (e.g. BPM, EAM) are taken as input (1a, 1c, 1b respectively) by PoSecCo to build the different layers of the Functional System model that represents the service provider’s complete landscape. The figure also sketches the process of deployment of the security configurations into the target system by means of a CMS (3a, 3b).

Two types of PoSecCo components shall be distinguished:

  • PoSecCo Infrastructure components interface with the IT service management tools to create, maintain and provide access to the Functional System model, which represents the functional aspects of a service provider's business and infrastructure.

  • PoSecCo Application components construct and leverage the policy chain on the basis of this Functional System model, and are used by the different stakeholders involved in policy and configuration management. The established policy chain ensures that work performed on one abstraction layer can be related to dependent artifacts on other layers. As such, the policy chain and its respective applications guarantee a consistent model on all layers and a holistic view on security and compliance.

The PoSecCo architecture follows an MVC-like paradigm. The Model, managed with the MoVE model repository, provides persistence for the unified Functional and Security models; all PoSecCo models are stored in MoVE. The View is provided as user interfaces in the presentation layer; the user interface (UI) is detached from the business logic so as to support the harmonization of the UIs of the different prototypes. The Controller, held within the application/infrastructure components, accesses the Model and the View and is the logical decision point: it retrieves, builds, or modifies a model based on triggered actions and then decides, via some internal logic, which view is the most appropriate.

To enable the interaction between the different PoSecCo components (infrastructure or applications) there is a communication protocol, HTTP-based to ensure interoperability, which defines different types of communication:

  • Direct communication between components (by means of a pre-defined component’s API),

  • Indirect communication, for CRUD operations on a single model element or for a commit of a complete model (through the MoVE model repository API),

  • Event bus communication, which allows components to register and to consume events related to model updates.

The event bus communication realizes the message-based architectural pattern and the pub/sub model (through message brokerage or message-oriented middleware (MOM)), as well as standard protocols that leverage the distribution of events and notifications in heterogeneous, complex systems. The central model repository MoVE uses the event bus to notify the interested PoSecCo components of any change in the state of a model. This way, the top-down approach (used to build the policy chain) and the bottom-up approach (used to validate security configurations and assess discrepancies in order to re-deploy configurations) can be performed with the collaboration of the various focal prototypes working in a synchronized manner.
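The pub/sub interaction just described can be illustrated with a minimal in-process sketch. This toy class is for illustration only; PoSecCo uses an HTTP-based protocol with real message-oriented middleware, not a Python object.

```python
# Minimal in-process sketch of the publish/subscribe pattern used by the
# PoSecCo event bus (illustrative only, not the actual implementation).
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """A component registers to consume events on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every handler registered for the topic."""
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []

# A component (e.g., a configuration validator) registers for model updates.
bus.subscribe("model.updated", received.append)

# MoVE notifies interested components of a change in a model's state.
bus.publish("model.updated",
            {"model": "SecurityConfiguration", "state": "changed"})
print(received)
```

In the real architecture, the broker decouples MoVE from the focal prototypes, so components can be added or restarted without the repository knowing about them.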

The adopted architectural approach has permitted a seamless integration of the different focal prototypes developed in the PoSecCo project. The result is an integrated prototype that will be evaluated by the project's end users in the first week of October 2013.

For More information: D1.3 - Concept and Architecture of the overall solution


Policy Chain Management – Change-Driven Who, What and When

17 Jun 2013 | Written by Ruth Breu - Contact Ruth

Since security is a negative requirement (a corporation or system is secure if it has no vulnerabilities or, in practice, only a set of known, accepted ones), a core challenge in the management of complex policy chains is the coordination of the tasks and responsibilities of numerous stakeholders. These stakeholders range from the executive level to the system administration level, within or across the corporation. Therefore, data integration and workflow support are important components of policy management.

In order to tackle these two aspects the PoSecCo consortium, after a careful evaluation of different options, has decided to use the MoVE model evolution engine as the central hub for data and workflow integration.

Data integration: Data integration in the realm of security policies first of all means supporting traceability between security policies at different levels of abstraction. For example, both the CSO and the auditor are interested in tracking business security policies down to the technical level.

Since security requirements in most cases depend on functional elements, e.g. considering security requirements of a business process or of a technical infrastructure element like a server, traceability between security requirements and the functional architecture is an additional cornerstone. MoVE supports such traceability issues through a rigorous model-based approach.

MoVE integrates model-based data generated in a heterogeneous tool environment through interlinked model elements adhering to a Common System Model in a central model repository. Local data sources (e.g. policies generated in the PoSecCo tool environment) can be connected with the central model repository through an advanced CRUD interface.

Workflow Integration: Workflow integration is concerned with the coordination of the manifold tasks and responsibilities around the security policies. These range from manual tasks like checking the fulfillment of a policy and refinement of a policy to the automated checking of consistency properties. A crucial factor within such task coordination is the systematic handling of changes. Changes, whether at requirements, organizational or technical level, are a recurring source of incidents. MoVE supports the coordination of stakeholder tasks and change workflows through a state-based workflow concept intimately connected with the model elements in the central repository.

Each model element in the repository can have states and state transitions attached to it. Policies may carry states such as fulfilled or not fulfilled. State transitions may trigger automated service calls (e.g., initiating the re-evaluation of updated model parts) or may create manual tasks in a task management system. In the PoSecCo use cases we have demonstrated that this concept is appropriate for modelling workflows within tools (e.g., within the CoSeRMaS security requirements tool) and across tools (e.g., non-compliance detected at the policy level revoking requirement fulfillment in CoSeRMaS).
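The state-based workflow concept can be sketched as a tiny state machine: a model element carries a state, legal transitions are declared explicitly, and each transition fires callbacks that may create tasks. All names below are illustrative, not the MoVE API.

```python
# Toy version of the MoVE-style state-based workflow concept: model
# elements carry states, and transitions trigger callbacks (here, the
# creation of a manual task). Illustrative only.
class PolicyElement:
    TRANSITIONS = {("fulfilled", "not fulfilled"),
                   ("not fulfilled", "fulfilled")}

    def __init__(self, name, state="not fulfilled"):
        self.name, self.state = name, state
        self.on_transition = []          # callbacks fired on each transition

    def set_state(self, new_state):
        if (self.state, new_state) not in self.TRANSITIONS:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        old, self.state = self.state, new_state
        for callback in self.on_transition:
            callback(self, old, new_state)

tasks = []
policy = PolicyElement("data-protection-policy")

# Revoked fulfilment creates a manual re-evaluation task.
policy.on_transition.append(
    lambda p, old, new: tasks.append(f"re-evaluate {p.name}")
    if new == "not fulfilled" else None)

policy.set_state("fulfilled")
policy.set_state("not fulfilled")
print(tasks)                             # ['re-evaluate data-protection-policy']
```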

For More information:



Configuration Management Databases in PoSecCo

10 Jun 2013 | Written by Annett Laube-Rosenpflanzer - Contact Annett

PoSecCo relies on two ITIL elements to support configuration management: Configuration Management Database (CMDB) and Configuration Management System (CMS). Configuration Items (CIs, typically hard and software components) are recorded in a CMDB and managed by the CMS by means of a series of standard processes. The data contained in the CMDB are the main source to create and maintain the Infrastructure Layer of the PoSecCo Functional Systems Model. The PoSecCo Functional System Model provides the data basis of all PoSecCo tools and has to be built for the entire landscape of the service provider for which the PoSecCo policy chain should be constructed.

PoSecCo assumes that the service providers' CMDBs:

 - support the WBEM standard, to collect data from the landscape components and to provide information to client applications, and

 - use a model based on the Common Information Model (CIM).

The PoSecCo CMDBExtractor queries all necessary landscape information from the connected WBEM-enabled CMDBs, converts it from the CIM-based model into the PoSecCo Functional System Model, and stores it in the PoSecCo model repository. In a second step, the created models have to be verified and completed in a semi-automated manner.
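The conversion step can be sketched as follows. The function name and the simplified target model are our own illustration, not the CMDBExtractor's actual interface; only the CIM class names (`CIM_ComputerSystem`, `CIM_OperatingSystem`) and their properties (`Name`, `CSName`, `Caption`) are standard CIM.

```python
# Hypothetical sketch of the CIM-to-Functional-System-Model conversion:
# CIM instances (as a WBEM query might return them, here as plain dicts)
# are mapped to a simplified infrastructure-layer model keyed by host.

def cim_to_functional_model(cim_instances):
    """Map CIM_ComputerSystem / CIM_OperatingSystem instances to a
    simplified infrastructure-layer model."""
    model = {}
    for inst in cim_instances:
        if inst["CreationClassName"] == "CIM_ComputerSystem":
            model.setdefault(inst["Name"], {"os": None})
        elif inst["CreationClassName"] == "CIM_OperatingSystem":
            host = model.setdefault(inst["CSName"], {"os": None})
            host["os"] = inst["Caption"]
    return model

instances = [  # as might be returned by a WBEM EnumerateInstances call
    {"CreationClassName": "CIM_ComputerSystem", "Name": "web01"},
    {"CreationClassName": "CIM_OperatingSystem", "CSName": "web01",
     "Caption": "Ubuntu 12.04"},
]
print(cim_to_functional_model(instances))
```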

In practice, only a few of the available CMDBs use the WBEM and CIM standards (an overview can be found in the PoSecCo Deliverable D1.2 – Reference Architecture). Commercial CMDBs often implement a simple data model, sometimes reusing concepts from CIM. These CMDBs are used to maintain the inventory and mainly contain a list of the hardware and software components (licenses) purchased. Information about which software components are running on which systems, and about how the systems are integrated into the networks, which is essential for the PoSecCo tools, is normally not included.

In PoSecCo, we use an OpenPegasus CMDB in our testbeds. OpenPegasus is one of the open source CMDBs that support the WBEM and CIM standards. The OpenPegasus CIM server contains a CIM object manager (CIMOM) and a CIM repository. The CIM repository only contains static data about Configuration Items, normally entered manually with different tools. More interesting is the possibility to dynamically retrieve information about the landscape. This is possible with the help of WBEM providers that monitor the configuration items and provide a snapshot of their current state. Unfortunately, only a few providers are available, mostly for hardware components such as computers or firewalls. In PoSecCo, where we focus more on the software components, only the WBEM provider for the operating system could be used; for all other components, we had to maintain the information manually in the CMDB.

In general, it is not very complicated to write WBEM providers for the components that should be monitored by the CMDB. But as each provider has to be registered in the CIMOM, this only makes sense in a very stable environment. In a frequently changing environment such as web servers, where updates of subcomponents (Java VM, security modules, etc.) and of the deployed web applications and web services are daily business, the use of providers does not really reduce the manual effort. An agent-based system that regularly scans the monitored systems seems more promising for future work.

The reader is welcome to read through the following documents for additional information about the above-mentioned concepts:


The PoSecCo Security Decision Support System

03 Jun 2013 | Written by Antonio Lioy - Contact Antonio

The PoSecCo project proposes an innovative approach to security design and management: it aims at establishing a traceable and sustainable link between high-level requirements and low-level configurations, by means of the Security Decision Support System (SDSS). This link is created at design time, when performing the policy refinement that establishes the policy chain (see 1 Apr blog post).

The last link in the chain is the one between the IT Security Policies and the Security Configurations. IT Security Policies are specified using a formal meta-model that represents a state-of-the-art access control policy specification format. IT Security Policies allow the specification of security requirements for authentication, authorization, and data protection. In natural language, they can be expressed as:

Customer#1 is allowed to reach the eInvoicing service


The communication between the eInvoicing service frontend and the Database must be protected

In PoSecCo, the refinement of IT Security Policies is performed according to two classes of security controls. First, the SDSS evaluates the possibility of using endpoint security controls, e.g., enforcing authorization policies (access control) at the operating system or database level, and authentication mechanisms implemented directly at the endpoints. Then, a topology-dependent refinement step is performed for policies whose enforcement can also be done using non-endpoint controls (e.g., network firewalls or security gateways). In the latter case, the IT Security Policies are translated into a different format, named Logical Association, which is amenable to topology-dependent analysis. This is one of the aspects where the SDSS exhibits a significant innovation: the integration of traditional access control (as supported by operating systems, databases, web servers and application servers) with network access control.

At the IT Security Policy level, only the endpoints are considered, while the structure of the IT system is simply ignored. However, when arriving at the Security Configurations this assumption no longer holds. An IT system is a network composed of many nodes, each one having specific capabilities (i.e. the security controls) that can be enabled, individually or in combination, to enforce the policy. Due to the wide availability of security controls, the infrastructure layer offers numerous possibilities to enforce the policy. For instance, data protection can be enforced by establishing secure channels with different technologies (e.g. WS-Security, SSL/TLS and IPsec). Additionally, these techniques can be used end-to-end or, in case of site-to-site policies, at the gateways (e.g. OpenVPN or IPsec in tunnel mode). Usually, the bigger and more complex the network, the more ways there are to enforce a policy.

This large set of choices is certainly an advantage for security and fault tolerance; however, it can create confusion and make the administrators' decisions difficult. When many implementations of the policy are possible, our approach uses mathematical optimization to select the "best" implementation of the IT Security Policy. The problem of selecting and configuring the enforcement points is mapped to an optimization problem that, given a description of the IT system and a policy, has to find a set of optimal Security Configurations enforcing the policy. The actual solution is found using an off-the-shelf optimization tool; we experimented with both an open source product (lp_solve) and a commercial one (CPLEX).

With this approach, the user only has to select the optimization profile that defines the concept of "best" (see the picture below).




The SDSS will then perform the rest, without asking for any other contribution from the user:

- Generation of the possible implementations: The policy is processed to identify the available security controls in the target network that, individually or in combination, can enforce it.

- Implementation rating: Each generated implementation is evaluated according to a different set of metrics: performance (individual components and overall), security (risk analysis, security controls reputation), and costs (deployment, management). This entails the existence of a company-approved risk analysis and reputation analysis model.

- Model generation and solution: In this phase, we formally model the problem of choosing among different implementations as an Integer Linear Programming (ILP) problem, so that standard solvers can compute the optimal set of implementations (an example of a generated optimization model is in the left figure below). This approach is easily extensible to different optimization criteria, since they can be represented as new target functions. For instance, the different implementations may be selected to minimize the risk, the maintenance or deployment costs, to maximize the performance, or to achieve a balanced mix of these objectives (that can be customized by the user). The optimization is a global one, as all the IT policies processed by the Infrastructure Configuration are considered at the same time and not one-by-one. The SDSS permits the visualization of the selected and discarded implementations, as well as the ratings (weights) and criteria for rejection (see the middle figure below).

- Security configuration generation: Once the mathematical solver has selected the "best" combination of security controls to enforce the policy, the final step is to generate the configurations: every IT Security Policy is interpreted and adapted to fit the category of security control to configure (e.g. filtering device, channel protection), and it contributes to the configurations with one or more configuration rules (see the right figure below).
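The selection performed by the solver can be illustrated with a tiny brute-force sketch. All candidate implementations, costs, risks, and weights below are hypothetical; the real SDSS generates an ILP model and hands it to lp_solve or CPLEX, but the idea of rating candidates against weighted criteria and choosing a global optimum is the same.

```python
from itertools import product

# Hypothetical candidate implementations per IT Security Policy, each
# rated on cost and risk (lower is better for both).
candidates = {
    "protect-frontend-db": [
        {"name": "TLS end-to-end", "cost": 3, "risk": 1},
        {"name": "IPsec tunnel via gateways", "cost": 2, "risk": 3},
    ],
    "restrict-eInvoicing": [
        {"name": "network firewall rule", "cost": 1, "risk": 2},
        {"name": "application-level ACL", "cost": 3, "risk": 1},
    ],
}

def select(candidates, w_cost=1.0, w_risk=1.0):
    """Pick one implementation per policy minimizing the weighted sum.

    Brute force over the cartesian product keeps the global flavour of
    the real optimization, where all policies are considered at once.
    """
    best, best_score = None, float("inf")
    for combo in product(*candidates.values()):
        score = sum(w_cost * c["cost"] + w_risk * c["risk"] for c in combo)
        if score < best_score:
            best, best_score = combo, score
    return {policy: impl["name"] for policy, impl in zip(candidates, best)}
```

Changing the weights corresponds to choosing a different optimization profile: with a risk-averse profile (a higher `w_risk`), the cheaper-but-riskier candidates lose.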






In summary, with the SDSS, PoSecCo offers the security manager a tool that evaluates the available options for implementing the security policy, automatically grades them according to user-specified criteria, and suggests the best solution to satisfy the requirements.

The user is welcome to read the SDSS documentation and try out the tool, as well as read the various deliverables that document the meta-models and the refinement procedure:




Keeping the cost of security Low

28 May 2013 | Written by Günter Karjoth - Contact Günter

Have you ever wanted not only to configure your system securely but also at the least cost? Unfortunately, the nature of security makes it hard to measure and therefore difficult to both quantify and evaluate. To be able to perform an analytical and more exact description of security, quantitative security measures are needed. Furthermore, security metrics should be meaningful. Yet, metrics are often too simplistic or are aggregated according to nice theoretical models but without empirical basis or business sense.

PoSecCo has studied the applicability of security metrics to the different types of decisions CIOs and other decision makers may encounter within policy and configuration management. As there is no single metric covering all security controls, we focused on a decision framework for identity and access management. Having a model of the Identity and Access Management (IAM) systems deployed in the organization, it supports the involved stakeholders in expressing and exploring their subjective concerns. A metric should reflect the specific needs of Role-Based Access Control (RBAC), widely used in enterprise security and identity management products. Our investigations have been on role modeling pertaining to analysis, design, management, and maintenance.

Different factors influence the complexity, and thus the quality, of RBAC configurations and their cost of administration. Among the data available to the organization, it is possible to find information that either directly influences the required system administration effort (e.g., number of roles, number of role-user relationships to be administered, etc.) or information that helps role engineers assign business meaning to roles (e.g., business processes, organization structure, etc.). Once an organization has identified the relevant data for access control purposes, this data can be "translated" into cost elements and then combined into a cost function. We developed a technique for modeling complex systems when analytical models of a situation do not exist. Based on multi-dimensional database technology, it supports the identification of ineffective access control mechanisms. In particular, we have explored OLAP visualization of access-control-related information as a source of intuitions and hypotheses and as a way of communicating model data. Using OLAP gives the opportunity to discover previously undiscerned relationships between data items, supporting the establishment of a sound role model.
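As a rough sketch of how such a cost function might look, the following toy model counts roles and assignments and weights them. The weights and numbers are hypothetical, not PoSecCo's actual cost elements; a real model would be derived from the organization's own data.

```python
def rbac_admin_cost(roles, user_assignments, perm_assignments,
                    w_role=5.0, w_user=1.0, w_perm=2.0):
    """Toy administration-cost function for an RBAC configuration.

    roles: iterable of role names
    user_assignments: iterable of (user, role) pairs
    perm_assignments: iterable of (role, permission) pairs
    The weights are illustrative placeholders for per-element
    administration effort.
    """
    return (w_role * len(set(roles))
            + w_user * len(set(user_assignments))
            + w_perm * len(set(perm_assignments)))

# Two configurations granting the same access: one role per user vs.
# a single shared role with business meaning.
flat = rbac_admin_cost(
    ["r_alice", "r_bob"],
    [("alice", "r_alice"), ("bob", "r_bob")],
    [("r_alice", "invoice.read"), ("r_bob", "invoice.read")])
shared = rbac_admin_cost(
    ["clerk"],
    [("alice", "clerk"), ("bob", "clerk")],
    [("clerk", "invoice.read")])
```

Under these weights the shared-role configuration is cheaper to administer, which is the kind of comparison a role-engineering analysis would surface.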

Balancing protection and empowerment is a central problem when specifying authorizations. The principle of least privilege, the classical approach to balancing these two conflicting objectives, says that users shall only be authorized to execute the tasks necessary to complete their job. However, business processes that require the execution of multiple tasks by different users can have multiple authorization configurations (representing different authorization policies) that satisfy least privilege. Furthermore, the choice of an authorization configuration may be influenced by the cost associated with the respective administrative change. We model the tasks that users must execute as workflows, and the risk and cost associated with authorization configurations and their administration. We then formulate the balancing of empowerment and protection as an optimization problem: finding a cost-minimizing authorization configuration that enables a successful workflow execution.
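The optimization described above can be sketched with a brute-force search over user-task assignments. The workflow, users, grant costs, and the separation-of-duty constraint are all hypothetical; a realistic instance would of course need a proper solver.

```python
from itertools import product

# Hypothetical 3-task workflow with a separation-of-duty constraint:
# the same user must not execute both 'approve' and 'pay'.
tasks = ["prepare", "approve", "pay"]
users = ["alice", "bob"]
sod = {("approve", "pay")}

# Illustrative cost of granting user u the authorization for task t
# (e.g. combining risk and administrative effort).
grant_cost = {
    ("alice", "prepare"): 1, ("alice", "approve"): 1, ("alice", "pay"): 3,
    ("bob", "prepare"): 2, ("bob", "approve"): 2, ("bob", "pay"): 1,
}

def cheapest_configuration():
    """Return the least-cost set of (user, task) grants that allows a
    SoD-compliant execution of the workflow (brute force)."""
    best, best_cost = None, float("inf")
    for assignment in product(users, repeat=len(tasks)):
        exec_plan = dict(zip(tasks, assignment))
        if any(exec_plan[a] == exec_plan[b] for a, b in sod):
            continue  # this execution would violate separation of duty
        grants = set(zip(assignment, tasks))
        cost = sum(grant_cost[g] for g in grants)
        if cost < best_cost:
            best, best_cost = grants, cost
    return best, best_cost
```

The returned grant set authorizes exactly one successful, SoD-compliant execution at minimal cost, which is the balance between empowerment and protection discussed above.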

In summary, the PoSecCo Decision Types and Framework provides cost/benefit models and techniques to support IT managers when making decisions between different configuration options as well as different ways of managing policies and configurations. Having a business focus, this work complements the technical research work packages of PoSecCo. By providing new ways to measure and optimize the cost-benefit ratio, it provides the basis for an economic justification of investments in infrastructures facilitating policy and configuration management.

For more information we refer to the following sources of information:


Vulnerability Assessment and remediation

20 May 2013 | Written by Kreshnik Musaraj - Contact Kreshnik

To guarantee compliance with high-level security policies in PoSecCo, especially in case of change in the landscape, we also rely on vulnerability assessment to provide adapted remediation proposals. The prototype we have implemented intends, firstly, to help security stakeholders understand the risk they are facing, by showing the existing vulnerabilities that a malicious user can exploit to violate the security policies in place; secondly, it gives the defenders the opportunity to select the best countermeasures to deploy, i.e., the ones offering the best balance between deployment costs and improvement in the compliance level.

The first step to carry out the vulnerability assessment is to build an attack graph based on the current state of the information system. An attack graph contains all the potential series of vulnerability exploitations that can be used by an attacker to compromise IT resources. It takes into consideration the configuration conditions required by an attacker to conduct his attack. The attack graph is generated using the topological information contained in the PoSecCo models (extracted from MoVE), a vulnerability database, and an attack graph engine provided by the Fi-Ware Security Monitoring Generic Enabler.



As an attack graph is often too complicated, it is necessary to extract relevant paths from this graph to support the work of security architects and operators.

Extracting an attack path depends on the definition of such a path. Intuitively, an attack path is a sub-graph of the entire attack graph that provides the set of dependencies required for reaching a given target in the attack graph. Such examples are provided in the figure below. The extraction of attack paths is a preliminary step in the remediation process, which is followed by the scoring computations of each attack path. The score of an attack path corresponds to the risk value of attaining the target, combined with the impact value that the compromised target has on the related processes. It is important to note that the score is related to the attack path as a whole. This means that the same target, depending on the attack path that allows reaching it, will be associated with potentially different attack path scores.
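The path extraction and scoring described above can be sketched on a toy attack graph. The nodes, exploit probabilities, and impact values below are illustrative, not real MulVAL output or CVE data.

```python
# Toy attack graph: edges point from a reachable state/vulnerability to
# the states it enables; all names and numbers are hypothetical.
edges = {
    "internet": ["vuln-web"],            # attacker's entry point
    "vuln-web": ["webserver"],           # exploiting the web server flaw
    "webserver": ["vuln-db", "vuln-ssh"],
    "vuln-db": ["database"],
    "vuln-ssh": ["database"],
}
exploit_prob = {"vuln-web": 0.8, "vuln-db": 0.5, "vuln-ssh": 0.3}
impact = {"database": 10}                # business impact of the target

def attack_paths(source, target, path=None):
    """Enumerate all simple paths from source to target (plain DFS)."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    found = []
    for nxt in edges.get(source, []):
        if nxt not in path:
            found.extend(attack_paths(nxt, target, path))
    return found

def path_score(path):
    """Risk of the path (product of exploit probabilities) times the
    impact of the compromised target: the same target thus gets a
    different score depending on the path that reaches it."""
    risk = 1.0
    for node in path:
        risk *= exploit_prob.get(node, 1.0)
    return risk * impact.get(path[-1], 1.0)
```

Here the database is reachable via two paths whose scores differ, exactly the per-path scoring behaviour noted above.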



When a security operator chooses an attack path, remediations that can cut this path (and prevent the intrusion) are calculated. These remediations could be, for example, the addition of a firewall rule or the deployment of a patch that corrects a vulnerability on a machine. Several remediations may successfully reduce the risk, and in order to help the user choose the most appropriate one, we compute an operational cost (the deployment cost of the remediation) and rank them accordingly.
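A minimal sketch of this ranking step, with hypothetical remediations and cost figures:

```python
# Hypothetical candidate remediations for a chosen attack path, each
# marked with whether it actually cuts the path and an estimated
# operational (deployment) cost.
remediations = [
    {"action": "patch web server vulnerability", "cuts": True, "cost": 5},
    {"action": "add firewall rule blocking port 80", "cuts": True, "cost": 2},
    {"action": "rotate database credentials", "cuts": False, "cost": 1},
]

def rank_remediations(remediations):
    """Keep only remediations that cut the attack path and rank them
    by ascending operational cost, cheapest first."""
    return sorted((r for r in remediations if r["cuts"]),
                  key=lambda r: r["cost"])

ranked = rank_remediations(remediations)
```

The cheapest effective remediation comes first, which is the ordering presented to the operator.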




Finally, to understand the impact of the chosen remediation prior to actually deploying it, we use the MoVE capability to create branches of the models on-the-fly and simulate the deployment in this duplicated MoVE database. A new attack graph is built and compared to the previous one, thanks to a global scoring function.

The global scoring function provides an aggregate assessment of the overall risk related to a given attack graph. While the score of an attack path provides the risk and impact for reaching a target vertex, the global score provides a way to numerically evaluate the global risk presented by an attack graph.




Comparing benefits of using the PoSecCo toolset to the costs of its deployment

13 May 2013 | Written by Lukas Demetz - Contact Lukas

The PoSecCo project comprises activities analyzing the economic viability of the developed toolset. That is, we compare the benefits of using the PoSecCo toolset to the costs of the toolset's deployment. For this purpose, we are currently conducting an online survey focusing on security and compliance processes. In more detail, in this study we are interested in performance dimensions and in the impact of certain cost drivers on the costs of security and compliance processes.

The survey consists of two main blocks. The first block focuses on performance dimensions of security and compliance processes. In this block, we use the point allocation method: participants allocate 100 points to the four dimensions execution time, resource consumption, correctness of output, and coverage of input. Such an allocation is done both in general and for each of the six activities: selecting control objectives, creating the control design, verifying the control implementation, verifying the control design, checking suppliers, and visualizing security models. The second block focuses on cost drivers of security and compliance processes. Here, we ask respondents to indicate the impact of given cost drivers on the costs of security and compliance processes using a seven-point Likert scale.

In this survey, we rely on the experience of professionals in the area of security and compliance management. If you are working for a service provider or an (IT) auditor and are in charge of security and compliance processes, we kindly invite you to participate in the survey.

You can access the survey here. The survey is in English and takes about 15 minutes to complete. Of course, the data is stored anonymously and analyzed only for the purpose of this study. Your participation is highly appreciated. Without the support of professionals, it would not be possible to conduct research like this. In case you know any other person who qualifies as a potential respondent, please feel free to forward the survey.

Once the survey is completed, we will be happy to share the results with all interested participants and readers of this blog.

The results of this survey will provide valuable input to:

  • D5.4 – Analysis of economic viability
  • D1.7 – Final project evaluation


PoSecCo Collaboration with FI-Ware Project

06 May 2013 | Written by Olivier Bettan - Contact Olivier

One main objective of PoSecCo is to promote collaboration activities with relevant external projects and communities. The most advanced synergies were exploited together with the Fi-Ware project.

FI-WARE delivers a novel service infrastructure built upon elements (called Generic Enablers – GEs) which offer reusable and commonly shared functions making it easier to develop Future Internet Applications in various sectors.

The FI-WARE project develops open specifications of FI-WARE GEs, together with a reference implementation of each GE for testing, some of which will be submitted for standardization. FI-WARE aims to draw upon results already achieved through earlier research projects to further develop and integrate them; this shared interest in enjoying the benefits of collaboration naturally led to the encounter between PoSecCo and Fi-Ware.

Regarding PoSecCo objectives, the most promising synergy was found within the Security Monitoring GE (see the Fi-Ware Wiki for more details). Three main capabilities of PoSecCo have allowed the Fi-Ware Security Monitoring GE to evolve.

The following figure shows the FI-Ware architecture before PoSecCo elements were incorporated.



- The PoSecCo Meta Models (Functional System Meta Model and Security Model) gather all the input required by the Attack Path Engine within Fi-Ware, and thus resulted in the design of a new component responsible for the extraction of “Topological data” from various sources.

- The PoSecCo focus on Security Requirements resulted in a specific implementation for evaluating the criticality of the Attack Paths discovered by the Fi-Ware MulVAL Attack Path Engine asset.

- The simulation capability of the PoSecCo vulnerability assessment and remediation tool, along with the ability to maintain multiple instances of PoSecCo models, led to the creation of such a capability within the Security Monitoring GE.


These mutual improvements provided the following architecture:



Benefits for PoSecCo were comparable: being compliant with Fi-Ware gives the project the opportunity to gain access to the existing and future GEs and assets that could speed up the project's development:


- The MulVAL Attack Path Engine asset was included as a main component of the PoSecCo Vulnerability Assessment and Remediation prototype.

- This allowed access to an up-to-date and widely accepted vulnerability database (the National Vulnerability Database, NVD).

- Remediation strategies were made available through the “Remediation App” asset and more easily extended to PoSecCo purposes.


Configuration validation – Ensuring control effectiveness

29 Apr 2013 | Written by Serena Ponta - Contact Serena

Configuration validation establishes whether actual configuration values of software components comply with desired ones. In the PoSecCo approach, desired configurations are defined within the policy chain together with the link towards the higher-level policies they are meant to enforce. As long as configuration settings can be altered manually, discrepancies between desired and actual values may exist at operation time, due to accidental changes or intentional attacks. Configuration validation is thus critical to ensure a secure and compliant configuration at any point in time.

Configuration validation is performed by means of checklists and checks specified with a declarative language based on the Security Content Automation Protocol (SCAP). SCAP is provided by the National Institute of Standards and Technology (NIST) and is an on-going effort to provide standards for security automation and to foster the exchange of security knowledge among stakeholders. In particular, we rely on XCCDF for the definition of structured checklists, and OVAL for the specification of security checks to detect discrepancies. It is important to note that we extended OVAL, which currently focuses on single hosts and operating systems, to support configuration checks for diverse, distributed software components.

The configuration validation tool is composed of an audit interface and a configuration validation back-end. The audit interface can be accessed by different stakeholders, e.g., internal/external auditors and security administrators, and supports various auditing activities.

The following video exemplifies the validation of configuration settings stemming from PCI DSS and authorization requirements for business services.


Checklist creation and export. The PoSecCo policy chain can be used to create an XCCDF checklist. The checklist can be structured according to customizable criteria. In the video it follows the policy chain, i.e., it involves a set of security requirements and all the linked elements of the chain down to the configuration settings enforcing them. Such a checklist can be used by external auditors as a basis for the creation of audit plans. Optionally, the checklist can be exported to be enriched and customized.


Checklist enrichment with external editors. The automatically created checklist can be externally enriched by using existing editors for the SCAP standard, e.g., eSCAPe. The customization may span from metadata, e.g., to include additional comments and descriptions of the audit plan, to the addition of configuration checks. In the video, checks for the OWASP and SANS recommendations are added. The resulting checklist can then be imported back into the audit interface and run.


Checklist import and execution. The execution of the checklist is performed by means of checklist and check interpreters that retrieve configuration settings in a distributed environment, evaluate the checks, and establish if a discrepancy exists. It is worth noting that we separate the check logic from the configuration settings' collection mechanisms. As an example, configurations of web applications are collected through the Java Management Extensions (JMX) interface offered by Tomcat.
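The separation of check logic from collection can be sketched as follows. The collector below is a stub standing in for a real JMX query, and all role and user names are hypothetical.

```python
def collect_webapp_roles(app):
    """Collector stub: in PoSecCo, settings of a web application are
    retrieved at runtime, e.g. via Tomcat's JMX interface; here we
    return canned data for illustration."""
    return {"admin-gui": {"alice"}, "manager-gui": {"alice", "mallory"}}

def check_role_assignments(desired, actual):
    """Check logic, independent of how 'actual' was collected: report
    a discrepancy for every role whose members differ from the desired
    configuration derived from the policy chain."""
    discrepancies = {}
    for role, members in desired.items():
        got = actual.get(role, set())
        if got != members:
            discrepancies[role] = {"unexpected": got - members,
                                   "missing": members - got}
    return discrepancies

# Desired state from the policy chain vs. collected actual state.
desired = {"admin-gui": {"alice"}, "manager-gui": {"alice"}}
result = check_role_assignments(desired, collect_webapp_roles("eInvoicing"))
```

Because the collector is pluggable, the same check logic can run against any component for which a collection mechanism exists.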


Discrepancy assessment. In case a discrepancy is detected, compliance with high-level requirements is no longer ensured. This situation holds until a security administrator establishes whether the discrepancy is indeed a misconfiguration. The PoSecCo tool supports the security administrator by providing additional information about the discrepancy. In the video, an assessment module provides detailed information about the differences in the assignment of permissions to users and roles between the desired and actual configurations of a J2EE web application. Finally, the tool allows the user to manually override the result.


By assessing the compliance of information systems' configurations with high-level requirements, the PoSecCo configuration validation supports and automates auditing activities and provides greater assurance to various stakeholders, like auditors, about the effectiveness of security controls in the operational landscape.


For more information, we refer to the following sources of information:

• D4.5 - Final Version of a Configuration Validation Language

• D4.8 - Prototype: Standardized Audit Interface


IT Policy Harmonization: Identification of conflicts in security policies

15 Apr 2013 | Written by Stefano Paraboschi - Contact Stefano

The IT Policy has a crucial role in the PoSecCo approach. Its responsibility is to build a strong connection between the textual, natural-language representation that is the outcome of requirements analysis (toward the upper level) and the logical description of the security configuration of the system (in the opposite direction). The representation of the IT Policy is on one hand formal, with the ability to support the automatic verification of several properties, and on the other hand relatively abstract, leading to the identification of properties that are independent of the specific implementation details that characterize each technological solution. The "IT Policy Harmonization" is the process that verifies the consistency of the policy. This process invokes the services of three modules. The modules rely on the use of Semantic Web technologies: the IT Policy is represented as an OWL ontology and the reasoning services are expressed with languages and tools such as OWL-DL, SWRL, and SPARQL-DL.


We briefly describe each of the three modules.


- Modality conflicts: this module is responsible for the identification of conflicts between rules of opposite sign that can be applied to the same requests. This occurs when the policy at the same time contains positive authorizations that grant to a user the ability of making a specific access request and negative authorizations that forbid the same access request. The identification of the conflicts takes into account the conflict resolution option specified in the IT Policy. The result of the analysis is the set of conflicts that are not solved by the conflict resolution algorithm. The figure shows the interface used for the description of the identified conflicts. There is an interface showing an explanation in terms of the internal OWL constructs and another interface that expresses the results in a format close to the IT Policy representation familiar to the user.
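A much-simplified sketch of modality-conflict detection; the real module reasons over the OWL ontology, and the rules and resolution strategies here are illustrative only.

```python
# Authorization rules as (sign, subject, action, resource);
# '+' grants, '-' denies. All names are hypothetical.
rules = [
    ("+", "customer1", "invoke", "eInvoicing"),
    ("-", "customer1", "invoke", "eInvoicing"),
    ("+", "customer2", "invoke", "eInvoicing"),
]

def modality_conflicts(rules, resolution=None):
    """Report request triples covered by both a positive and a negative
    rule. A conflict-resolution strategy removes solved conflicts,
    mirroring the behaviour of the module described above: only the
    unresolved conflicts are reported to the user."""
    pos = {r[1:] for r in rules if r[0] == "+"}
    neg = {r[1:] for r in rules if r[0] == "-"}
    if resolution in ("deny-overrides", "permit-overrides"):
        return set()  # these strategies resolve every modality conflict
    return pos & neg
```

Without a resolution option the conflicting triple is reported; with a deterministic strategy such as deny-overrides, nothing remains to report.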



- Redundancy detection: this module detects the presence of authorization rules that are implied by other authorizations and can then be removed without an impact on the semantics of the policy. The goal of this control is both to identify potential sources of redundancy and to notify the security designer and administrator of possible overlaps between independent policies, as these situations may be a signal of wider anomalies. The figure shows the interface offered by the IT Policy Tool for the notification of the redundancies in the IT Policy format. The reasoning service that detects the redundancy is implemented using SWRL rules and SPARQL-DL queries.
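Skipping the SWRL/SPARQL-DL machinery, the core idea of a grant implied by another grant can be sketched with a hypothetical role hierarchy:

```python
# Hypothetical role hierarchy: a senior role inherits the
# authorizations of its (transitive) junior roles.
junior_of = {"manager": {"clerk"}, "clerk": set()}

grants = [("manager", "invoice.read"),
          ("clerk", "invoice.read"),
          ("clerk", "invoice.create")]

def redundant_grants(grants, junior_of):
    """A grant to a role is redundant if one of its transitive junior
    roles already holds the same permission: removing it does not
    change the semantics of the policy."""
    def juniors(role, seen=None):
        seen = seen if seen is not None else set()
        for j in junior_of.get(role, ()):
            if j not in seen:
                seen.add(j)
                juniors(j, seen)
        return seen
    held = set(grants)
    return {(r, p) for (r, p) in grants
            if any((j, p) in held for j in juniors(r))}
```

Here the explicit grant of invoice.read to manager is flagged, since manager already inherits it from clerk; such overlaps are exactly what the module reports to the security designer.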



- Separation of Duty violations: this module identifies violations to Separation of Duty (SoD) constraints, which are expressed in the PoSecCo IT Policy as negative role authorizations, which are interpreted as incompatibilities among distinct roles. The module, using OWL-DL reasoning, verifies if the authorizations permit that conflicting roles can be enacted by the same user or role. The analysis is restricted to "static" SoD constraints, as the IT Policy model does not currently support the representation of "dynamic" SoD.
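A minimal sketch of a static SoD check; the actual module uses OWL-DL reasoning, and the memberships and incompatibilities below are hypothetical.

```python
# Role memberships (user -> roles) and pairs of incompatible roles
# expressing static separation-of-duty constraints.
member_of = {"alice": {"accountant"},
             "bob": {"accountant", "auditor"}}
incompatible = {frozenset({"accountant", "auditor"})}

def sod_violations(member_of, incompatible):
    """Return the users who enact two roles declared incompatible,
    i.e. the violations of static SoD constraints."""
    return {user for user, roles in member_of.items()
            if any(pair <= roles for pair in incompatible)}
```

Only bob, who holds both incompatible roles, is reported; dynamic SoD (constraints over a single session or workflow instance) would require execution context and is, as noted above, not covered by the current IT Policy model.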



For more information, we refer to the following sources of information:


Business Policies – Corporate Monitoring of Security Requirements

08 Apr 2013 | Written by Ruth Breu (University of Innsbruck) - Contact Ruth

A core task of the PoSecCo Business Layer is to conceptualize information from manifold sources, ranging from standards, legal regulations, and customer service level agreements to information and events from lower levels of the policy chain. A tool at this level of abstraction aims to provide executive stakeholders with a cockpit to monitor corporate security requirements. With CoSeRMaS we developed an innovative tool which provides a fundamental solution to a number of challenges.


- Dependency on functional concepts: Security requirements in CoSeRMaS are bound to functional concepts, e.g., a business process, an institution or a business object. On the other hand, CoSeRMaS is fully generic with respect to the meta-model of the business process and functional layer, supporting continuous information exchange with external tools (e.g. an Enterprise Architecture Management tool). That way, cost-intensive maintenance of the business model can be avoided.

- Collaborative management of security requirements: Capturing security requirements is becoming more and more a highly collaborative process. CoSeRMaS integrates techniques to coordinate and support stakeholders, e.g. through a collaborative refinement process and a workflow-aware task and message system. The collaboration may involve external stakeholders, like service providers or auditors.

- Monitoring requirements fulfillment: Security requirements go through two phases: elicitation and operation. After collaboratively eliciting the necessary security requirements, CoSeRMaS provides support for a flexible fulfillment model where, e.g., the fulfillment of a super-ordinate requirement may depend on the fulfillment of its sub-ordinate requirements (and/or some automated check).

- Change Handling: Numerous security incidents had their root cause in some change. CoSeRMaS provides rigorous change handling based on the MoVE Model Evolution Engine: any model element (e.g. a security requirement, a functional model element) can be attached with a state machine coordinating change propagation and change handling. The interface for CoSeRMaS users with respect to this change handling mechanism consists of the states of the security requirements and the task and message system.
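One possible rule of the fulfillment model described under "Monitoring requirements fulfillment" can be sketched as follows; the requirement names and statuses are hypothetical, and a real deployment would combine manual input with automated checks.

```python
# Requirement refinement tree and per-requirement local status
# (illustrative; in CoSeRMaS the status may come from stakeholders
# or from automated checks).
subrequirements = {"protect-data": ["encrypt-transit", "encrypt-rest"]}
local_status = {"protect-data": True,
                "encrypt-transit": True,
                "encrypt-rest": False}

def fulfilled(req):
    """One possible fulfillment rule: a requirement is fulfilled when
    its own status holds and all of its sub-ordinate requirements are
    fulfilled, so an unfulfilled leaf propagates upwards."""
    return local_status.get(req, False) and all(
        fulfilled(s) for s in subrequirements.get(req, ()))
```

The super-ordinate requirement is not fulfilled here because one of its sub-ordinates fails, which is exactly the propagation a monitoring cockpit would surface.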

In addition to these well-conceptualized mechanisms, CoSeRMaS comes with a novel template system to support standards and best practices, and a variety of analysis and reporting features.

For more information we refer to the following sources of information:


Policy Chain - Connecting security policies of 3 abstraction levels

01 Apr 2013 | Written by Henrik Plate (SAP) - Contact Henrik

In very general terms, a policy is commonly understood as a "definite goal, course or method of action to guide and determine present and future decisions" [RFC3198]. In the scope of IT system management, policies are supposed to constrain and determine the behavior of computing systems, and exist in several abstractions, from high-level goal policies to low-level operational policies. The various policy abstractions are typically said to form a policy hierarchy, created in a top-down refinement process. This process starts with the specification of high-level goals that relate to business terms, and is completed by the specification of deployable policies that ultimately enforce the high-level policy in a given IT system, e.g., by the configuration of firewalls or authorization settings.

The specification and refinement of policies requires the collaboration of various stakeholders to ensure, for instance, that all relevant goals have been identified, that lower-level policies are indeed suitable for enforcing higher-level ones, or that policy conflicts are avoided. This collaborative work involves representatives from management, legal or vendor management, as well as security experts knowledgeable in the various technologies that constitute modern IT systems. According to today's practice, policies are typically represented in natural language - in early stages of the refinement process - or by concrete, product-specific configurations. The non-integrated representation of policies belonging to the different levels and the use of different media for policy storage and maintenance, however, hinder the efficient collaboration of the involved stakeholders and increase the risk that high-level goal policies and enforcement policies diverge, which in turn can result in security and compliance issues.

PoSecCo aims to develop new methods and tools that support organizations in the policy refinement process. We thereby speak of a "policy chain" to emphasize the tight coupling of policy representations of different abstractions. This chain is established during a design-time refinement and optimization process, whereby the specification of policies on each layer is supported by dedicated tools. Policy designers at the various levels are supported in many ways, e.g., in the identification and resolution of policy conflicts, or the selection and automated configuration of suitable enforcement mechanisms. At runtime, the policy chain can be leveraged, for instance, to support audit activities, or to understand the impact of security misconfigurations.

The three different elements of the policy chain, Business Policies, IT Security Policies and Security Configurations, are represented by corresponding information models, each one of them linked to a corresponding model for functional matters. In other words, Business Policies (that comprise Security Requirements) are defined over a Business Model describing concepts such as Business Service, Customer or Supplier, IT Security Policies are defined over an IT Service Model that describes the architecture of an IT system by pointing to its main building blocks, interfaces, and communication channels, and the Security Configuration is defined over an Infrastructure Model that describes system details such as the network topology, application instances or actual communication endpoints, all of which have been modeled close to the DMTF standard CIM (Common Information Model).

The following video illustrates a simplified example of a PoSecCo policy chain by showing one functionality of the PoSecCo tool meant to support audit activities. The JavaScript-based visualization of the policy chain allows judging the design effectiveness of an organization's control framework, i.e., understanding whether designed security controls are suitable for achieving certain goals. The video at hand takes an example of a security requirement stemming from PCI DSS, which asks for the protection of cardholder data.


We first of all use the so-called Meta-Model Explorer to select the class whose instances we want to see. To visualize the policy chain, we select the class 'Security Requirement', which represents the top-most element of the policy chain. When switching to the so-called Instance Explorer, we see all instances of security requirements maintained in the demo scenario, whereby the PCI DSS requirement is addressed by R3 and its two subrequirements R3.1 and R3.2, which concern cardholder data in transit and at rest. The former is fulfilled by the 'IT Security Policies' ITP100 and ITP102, which demand the protection of cardholder data when accessed through the Web interface, which is in turn enforced by the 'Security Configurations' CR6, CR7, and CR8, all of which enable SSL for given Web applications in the respective system.
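The traversal shown in the video can be mimicked with a toy policy chain. The identifiers follow the demo scenario, but the links below are illustrative, and the refinement of R3.2 (data at rest) is omitted.

```python
# Miniature policy chain: each element refines into lower-level ones,
# down to the Security Configurations that enforce it. Links are
# illustrative, loosely following the demo scenario above.
refines = {
    "R3": ["R3.1", "R3.2"],        # protect cardholder data
    "R3.1": ["ITP100", "ITP102"],  # ... in transit (R3.2 omitted here)
    "ITP100": ["CR6", "CR7"],
    "ITP102": ["CR8"],             # CR6-CR8 enable SSL for web apps
}

def enforcing_configurations(element):
    """Walk the chain top-down and collect the leaf configuration
    settings that ultimately enforce the given element."""
    children = refines.get(element)
    if not children:
        return {element}
    leaves = set()
    for child in children:
        leaves |= enforcing_configurations(child)
    return leaves
```

This downward walk is also what makes the chain useful at runtime: from a requirement one can reach the exact configuration rules whose discrepancy would endanger it, and vice versa.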

Further information about important PoSecCo concepts will be communicated in upcoming blog posts, as well as in the PoSecCo newsletter.

The reader is welcome to read through the following documents in order to get additional information about the above-mentioned concepts:

PoSecCo Newsletter

  • PoSecCo releases video for ICT2013 in Vilnius!
  • CIRRUS project announces its 3rd workshop and the launch of a CEN workshop agreement
  • PoSecCo demonstration @ ICT 2013, 6-8 November, Vilnius
  • PoSecCo final project evaluation underway!
  • BIC IAG Annual Forum 2013 to be held in Vilnius, Lithuania during ICT 2013