Capítols de llibre (Book chapters)
http://hdl.handle.net/2117/6173 (updated 2024-03-19)

Domain-independent reference architectures and standards
http://hdl.handle.net/2117/386831 (2023-05-02)
Martínez Fernández, Silverio Juan; Franch Gutiérrez, Javier; Ayala Martínez, Claudia Patricia
To remain competitive, organizations are challenged to make informed design decisions when constructing software systems with similar architectural needs. They use reference architectures to achieve interoperability of (parts of) their software, standardization of software systems among multiple actors, and faster development through templates and guidelines for design. In this chapter, we give an overview of domain-independent reference architectures and standards available in the gray literature for practitioners, classified by technology (cloud computing, big data, Internet of Things, and artificial intelligence) and analyzed by their characteristics (e.g., purpose and contents). The discussed reference architectures are either published by standardization organizations such as ISO/IEC and NIST or fostered by large companies such as Microsoft and IBM.

An elastic software architecture for extreme-scale big data analytics
http://hdl.handle.net/2117/374061 (2022-10-06)
Serrano Garcia, Maria Aston; Marín, César A.; Queralt Calafat, Anna; Cordeiro, Cristovao; González Hierro, Marco; Pinho, Luis Miguel; Quiñones Moreno, Eduardo
This chapter describes a software architecture for big-data analytics across the complete compute continuum, from the edge to the cloud. The new generation of smart systems requires processing a vast amount of diverse information from distributed data sources. The software architecture presented in this chapter addresses two main challenges. On the one hand, a new elasticity concept enables smart systems to satisfy the performance requirements of extreme-scale analytics workloads. Extending the elasticity concept (well known on the cloud side) across the compute continuum in a fog computing environment, combined with advanced heterogeneous hardware architectures on the edge side, can significantly increase the capabilities of extreme-scale analytics, integrating both responsive data-in-motion and latent data-at-rest analytics into a single solution. On the other hand, the software architecture also focuses on fulfilling the non-functional properties inherited from smart systems, such as real-time behaviour, energy efficiency, communication quality, and security, which are of paramount importance for application domains such as smart cities, smart mobility, and smart manufacturing.

Assessment of exams at Atenea, an IMS LTI application for scalability problems
http://hdl.handle.net/2117/362585 (2022-02-17)
Alier Forment, Marc; Casany Guerrero, María José; Llorens García, Ariadna; Alcober Segura, Jesús Ángel; Prat Farran, Joana d'Arc
In 2004, the Universitat Politècnica de Catalunya replaced its proprietary online teaching support platform with Moodle, an open-source learning management system (LMS). The population's confinement at home due to the COVID-19 pandemic in 2020 was a stress test for the whole university community, particularly for those responsible for supporting the online teaching platform. The increased activity and the prospect of generalized online assessment raised concerns about possible scalability problems with the quiz functionality during the examination period. The solution was to deploy a high-performance version of the Moodle Quiz module as a service (SaaS), plugged into the university's LMS through Moodle's IMS LTI interoperability features and without relying on internal resources. This study analyzes that solution, which combined a systems strategy, private cloud operations, internal communication, and teacher training, and which allowed university courses to be assessed during confinement.

Apps per al rastreig de contactes: present i futur (Contact-tracing apps: present and future)
http://hdl.handle.net/2117/359278 (2022-01-11)
Olivé Ramon, Antoni

Teaching ethics and sustainability to informatics engineering students: an almost 30 years’ experience
http://hdl.handle.net/2117/357910 (2021-12-09)
Casany Guerrero, María José; Alier Forment, Marc; Llorens García, Ariadna
A significant number of universities where engineering is taught acknowledge the importance of the social and environmental impact of scientific and technological practice, as well as the ethical problems it presents, and the need to offer their students courses covering these subjects. This paper presents 29 years of teaching courses on social, environmental, and ethical issues to students of Informatics Engineering. It analyzes the table of contents and its evolution over the years, as well as the different teaching strategies applied, with emphasis on collaborative learning methodologies that facilitate critical thinking and debate. During this experience, the course incorporated the history of informatics, which proved to fit well in the course. While ethics and sustainability are increasingly regarded as important matters for future ICT engineers to learn, the courses covering them remain optional in the curricula. This should change.

Constructing and using software requirement patterns
http://hdl.handle.net/2117/173890 (2019-12-13)
Franch Gutiérrez, Javier; Quer, Carme; Renault, Samuel; Guerlain, Cindy; Palomares Bonache, Cristina
Software requirement reuse strategies are necessary to capitalize on and reuse knowledge in the requirements engineering phase. The PABRE framework is designed to support requirement reuse through software requirement patterns. It consists of a meta-model that describes the main concepts around the notion of pattern, a method to conduct the elicitation and documentation processes, a catalogue of patterns, and a tool that supports the catalogue’s management and use. This chapter presents all these elements in detail, with emphasis on the construction, use, and evolution of software requirement patterns. Furthermore, the chapter includes the construction of a catalogue of non-technical software requirement patterns for illustration purposes.
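As a rough illustration of the pattern idea (the class, field, and pattern names below are invented for this sketch, not taken from the PABRE meta-model), a requirement pattern can be seen as a parameterized template that is instantiated with project-specific values:

```python
from dataclasses import dataclass, field

@dataclass
class RequirementPattern:
    """A reusable requirement template with placeholders.
    Illustrative only; not the actual PABRE meta-model."""
    name: str
    template: str                                    # text with {placeholders}
    parameters: dict = field(default_factory=dict)   # placeholder -> default value

    def instantiate(self, **values):
        """Fill placeholders with project-specific values, falling back to defaults."""
        args = {**self.parameters, **values}
        return self.template.format(**args)

# A hypothetical availability pattern from a catalogue.
availability = RequirementPattern(
    name="Availability",
    template="The system shall be available {uptime}% of the time.",
    parameters={"uptime": 99.0},
)

print(availability.instantiate(uptime=99.9))
# -> The system shall be available 99.9% of the time.
```

A catalogue would then simply be a collection of such patterns, looked up by name during elicitation and instantiated per project.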

Integration-oriented ontology
http://hdl.handle.net/2117/129931 (2019-02-28)
Nadal Francesch, Sergi; Abelló Gamazo, Alberto
The purpose of an integration-oriented ontology is to provide a conceptualization of a domain of interest for automating the data integration of an evolving and heterogeneous set of sources using Semantic Web technologies. It links domain concepts to each of the underlying data sources via schema mappings. Data analysts, who are domain experts but do not necessarily have technical data management skills, pose ontology-mediated queries over the conceptualization, which are automatically translated into the appropriate query language for the sources at hand. Following well-established rules when designing schema mappings makes it possible to automate the process of query rewriting and execution.
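The mapping-driven rewriting described above can be sketched minimally (the concept, source, and column names are hypothetical, and real systems would use ontology languages and mapping standards rather than plain dictionaries): an ontology concept is linked to per-source schemas, and a concept-level projection is rewritten into one query per source.

```python
# Schema mappings: ontology concept -> {source: {ontology attribute: source column}}.
# All names are illustrative placeholders.
MAPPINGS = {
    "Sensor": {
        "postgres_src": {"id": "sensor_id", "value": "reading"},
        "csv_src":      {"id": "sid",       "value": "val"},
    },
}

def rewrite(concept, attrs):
    """Rewrite an ontology-level projection into one SQL-like query per source,
    using the schema mappings to translate attribute names."""
    queries = {}
    for source, mapping in MAPPINGS[concept].items():
        cols = ", ".join(mapping[a] for a in attrs)
        queries[source] = f"SELECT {cols} FROM {source}.{concept.lower()}"
    return queries

print(rewrite("Sensor", ["id", "value"]))
```

The analyst only ever mentions `Sensor`, `id`, and `value`; the per-source vocabulary stays hidden behind the mappings, which is what allows sources to evolve independently.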

Esclatec: recerca + desenvolupament + innovació x inclusió (research + development + innovation × inclusion)
http://hdl.handle.net/2117/118323 (2018-06-21)
Cabré Garcia, José M.; Climent Vilaró, Joan; López Álvarez, David; Martín Escofet, Carme; Sánchez Carracedo, Fermín; Vidal López, Eva María

On-line analytical processing
http://hdl.handle.net/2117/101918 (2017-03-03)
Abelló Gamazo, Alberto; Romero Moral, Óscar
On-line analytical processing (OLAP) describes an approach to decision support that aims to extract knowledge from a data warehouse or, more specifically, from data marts. Its main idea is to provide navigation through the data to non-expert users, so that they can interactively generate ad hoc queries without the intervention of IT professionals. The name was introduced in contrast to on-line transactional processing (OLTP), to reflect the different requirements and characteristics of these two classes of use. The concept falls within the area of business intelligence.
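The kind of interactive navigation OLAP enables can be illustrated with a toy fact table and two classic operations, roll-up and slice (the dimensions, data, and function names are invented for this sketch):

```python
# Toy data mart: fact rows with two dimensions (city, month) and one measure (sales).
FACTS = [
    ("Barcelona", "Jan", 100),
    ("Barcelona", "Feb", 150),
    ("Madrid",    "Jan", 200),
    ("Madrid",    "Feb", 250),
]

def roll_up(facts, dim_index):
    """Aggregate the measure along one dimension (OLAP roll-up)."""
    totals = {}
    for row in facts:
        key = row[dim_index]
        totals[key] = totals.get(key, 0) + row[2]
    return totals

def slice_(facts, dim_index, value):
    """Fix one dimension to a single value (OLAP slice)."""
    return [row for row in facts if row[dim_index] == value]

print(roll_up(FACTS, 0))        # total sales per city
print(slice_(FACTS, 1, "Jan"))  # only the January facts
```

An OLAP tool exposes exactly these moves (plus drill-down, dice, pivot) through a point-and-click interface, so the end user never writes the underlying queries.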

Measuring the quality of open source software ecosystems using QuESo
http://hdl.handle.net/2117/101472 (2017-02-23)
Franco Bedoya, Óscar Hernán; Ameller, David; Costal Costa, Dolors; Franch Gutiérrez, Javier