
Congratulations, Vaadin!

New growth engine ecosystems launched: billion-euro growth expected from autonomous traffic, digital platforms, and removing plastic from waterways – Business Finland

Business Finland has selected four companies as winners of its latest growth engine competition: Unikie, Vaadin, Family in Music and Lamor. Each company has committed to pursuing at least one billion euros of new innovation-driven business in the ecosystems in which it acts as the lead company. The growth engines are implemented through a company-driven partnership model involving companies, research organisations and public actors.

Source: Business Finland 2020


Great applications are built on forgettable APIs

Written by Leif Åstrand (Vaadin)

An API is an interface that application developers use to make their application interact with some other application, or with another part of the same application if it has a modular structure. The purpose of this interaction is to transmit some kind of data between the applications, such as fetching the latest weather forecast or submitting tax records to the appropriate authorities. The API defines how the data should be structured in the different types of messages sent between the communicating parties.

At the end of the day, the purpose of data and applications is to serve the needs of human users. The relationship is sometimes only indirect, when multiple intermediate applications pass data between each other before it reaches the application the user is using. Since humans and computers have quite different means of communication, a different type of interface is needed instead of an API. This is the user interface, or UI.

From the point of view of an application user, the UI is all that matters. Everything else is an implementation detail to them. They only care about getting their job done as efficiently as possible, not about how that is carried out. API use matters no more than any other technicality, such as which programming language is used to implement the application or what kind of hardware hosts any server-side functionality of the application.

Application developers serve their users

It is the application developer’s job to make all of this come true for the end user. They should design and implement the UI based on the needs of the user and not based on what’s convenient with the APIs they are using. If the API uses a weird format or structure for its data, then the application should take care of converting that to something that makes sense for the user. If there is a risk that the API isn’t always available, then the application developer should ensure suitable fallbacks are in place. Finally, the application needs to mitigate the risk that the API might change in the future.

From a more practical point of view, this typically means that the application should use an anti-corruption layer between the API and the rest of the application. This makes the API quite forgettable, since developers working on other parts of the application won't have to remember all the details that are encapsulated in the anti-corruption layer. Encapsulating the API also has secondary benefits, such as making testing easier.
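As an illustration, here is a minimal sketch of such an anti-corruption layer in Python. The weather provider, its endpoint and its field names are hypothetical; the point is that the rest of the application only ever sees the clean Forecast type:

```python
from dataclasses import dataclass

import requests


@dataclass
class Forecast:
    """The application's own model - independent of any provider format."""
    city: str
    temperature_celsius: float


class WeatherGateway:
    """Anti-corruption layer: the only place that knows the provider's quirks."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def latest_forecast(self, city: str) -> Forecast:
        # Hypothetical provider endpoint and field names.
        raw = requests.get(f"{self.base_url}/v2/fc", params={"loc": city}, timeout=5).json()
        # Convert the provider's odd representation (tenths of a degree,
        # cryptic keys) into something that makes sense for the user.
        return Forecast(city=city, temperature_celsius=raw["tmp10"] / 10.0)
```

If the provider changes its field names or units, only this one class needs updating, which is exactly what makes the API forgettable for everyone else.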

Dealing with potential reliability or performance issues with an API leads to some additional considerations. At the very least, the application needs to have timeouts or circuit breakers in place to deal with the unexpected. If circumstances allow, it’s also very beneficial for the application to have a local caching layer to further isolate the user from the API. It’s also essential to design the UI in a way that any problematic situation is clearly understandable to the user. Anything that can go wrong will go wrong, eventually.
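A rough sketch of these safeguards is shown below; it wraps any fetch function (for example the gateway above) with a local cache and a simple circuit breaker. The thresholds and lifetimes are illustrative assumptions, not recommendations:

```python
import time


class CachedCircuitBreaker:
    """Wraps an API call with a local cache and a simple circuit breaker."""

    def __init__(self, fetch, max_failures=3, cooldown_s=30, cache_ttl_s=300):
        self.fetch = fetch              # e.g. WeatherGateway.latest_forecast
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.cache_ttl_s = cache_ttl_s
        self.failures = 0
        self.opened_at = None
        self.cache = {}                 # key -> (timestamp, value)

    def call(self, key):
        now = time.time()
        cached = self.cache.get(key)
        # Serve fresh-enough data locally to isolate the user from the API.
        if cached and now - cached[0] < self.cache_ttl_s:
            return cached[1]
        # Circuit open: fail fast (or fall back to stale data) instead of hanging.
        if self.opened_at and now - self.opened_at < self.cooldown_s:
            if cached:
                return cached[1]
            raise RuntimeError("Service temporarily unavailable")
        try:
            value = self.fetch(key)     # the underlying call should use its own timeout
            self.failures = 0
            self.opened_at = None
            self.cache[key] = (now, value)
            return value
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now
            if cached:
                return cached[1]        # stale data is better than nothing
            raise
```

Whatever still goes wrong after these layers should then be surfaced to the user in a form they can understand.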

It’s lots of work for the application developer to do all of this, but it’s their job. Otherwise, the user might just as well use the original API as-is and the application would thus be worthless. The biggest challenge might be how to know when enough is enough.

API providers serve application developers

It is the API provider’s job to make all of this come true for the application developer, and by extension also for the end user. It will be easy for the application developer to implement an appropriate anti-corruption layer if the API is well designed. The same also goes for dealing with potential reliability and performance issues.

This means that the provider should follow good practices for API design. Calling conventions should follow established patterns. Data should be structured in a logical way with intuitive names. Performance and availability should be sufficient and consistent.

The API should do its thing as expected without causing surprises for the application developer. This in turn makes the API forgettable.

It’s still worth it

All of this sounds like an awful lot of work for everyone involved – except the user who just gets to enjoy a good UI design that works reliably. At the same time, the rewards are immense. That’s why software developers provide and use forgettable APIs despite the efforts involved.


AI as a composer:

Using algorithms to compose music in pre-defined styles

With AIVA – the AI composing emotional soundtrack music – 4APIs wishes you a relaxing holiday season.


On API monitoring

Written by Elina Kettunen (UH)

API monitoring is often understood as monitoring the availability, performance, and functional correctness of an API. That is, API monitoring means technical monitoring of the API behavior at runtime, and it covers different measures, such as monitoring how many times an API is called per hour, how fast the response times are, and how much resources the API consumes. API monitoring is a part of API management and, in addition to basic API monitoring, it is common to add security features, such as audit logs, that aim to answer questions like “who, what, where and when”.

The goals of web API monitoring can vary depending on the nature of the API. For example, providers of simple, free, information-providing APIs mainly wish to keep a record of how many times a single API client contacts the API per hour, as there are limits on how many calls are allowed. Other, more safety-critical APIs need a far more comprehensive API monitoring scheme with several different monitoring metrics, monitored resources, alerting policies, and audit logs.
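As a minimal sketch of this kind of per-client call counting, the following keeps a sliding one-hour window of timestamps per client; the window length and the quota are illustrative assumptions:

```python
import time
from collections import defaultdict, deque


class CallCounter:
    """Counts API calls per client within a sliding one-hour window."""

    def __init__(self, window_s=3600, limit=1000):
        self.window_s = window_s
        self.limit = limit
        self.calls = defaultdict(deque)   # client_id -> timestamps of recent calls

    def record(self, client_id: str) -> bool:
        """Record one call; return False if the client exceeded its hourly quota."""
        now = time.time()
        q = self.calls[client_id]
        q.append(now)
        # Drop calls that have fallen out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) <= self.limit
```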

API uptime monitoring is considered one of the most important monitoring metrics. As the costs of API downtime can be substantial, being able to quickly get notifications on an API being unavailable can be vital for API providers [1, 2]. Also, an API failure can be more catastrophic than an application failure, because a broken API affects potentially multiple applications and users that depend on the API. Thus, there is a need for performance data collection besides usage statistics and API monitors should mimic expected usage scenarios [3]. API monitoring can also cover data validation (i.e. checking if the data the API sends or receives is valid) and Service-Level Agreement (SLA) satisfaction [4].
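To illustrate the idea of a monitor that mimics expected usage, here is a simple synthetic probe that calls an endpoint and records availability and response time; the URL and the scheduling are placeholders, and a real monitor would feed these results into alerting:

```python
import time

import requests


def probe(url: str, timeout_s: float = 5.0) -> dict:
    """Call an endpoint once and report availability and latency."""
    started = time.time()
    try:
        response = requests.get(url, timeout=timeout_s)
        return {
            "up": response.ok,
            "status": response.status_code,
            "latency_ms": (time.time() - started) * 1000,
        }
    except requests.RequestException:
        return {"up": False, "status": None, "latency_ms": None}


# Example: run periodically from a scheduler and alert when "up" is False.
# print(probe("https://api.example.com/health"))
```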

According to Broadcom’s API monitoring guide, typical DevOps tools, like application performance monitoring tools, may not be able to detect why an API is having a performance issue, which calls for specific API monitoring tools. It is also important to monitor third-party APIs so that issues are quickly identified and reported to the API producers [4].

Often a distinction is made between synthetic and real-user API monitoring. Synthetic monitoring includes, for example, uptime and performance monitoring, and the API behaviour is analysed using emulations or simulations of the application environment, scripted tests, API mocks, and service virtualization. Real-user API monitoring covers topics like user experience and transaction performance, and the aim is to use actual users to test the application in real-world environments. This may not always be feasible, but it is especially important for mission-critical APIs [4].

From a security perspective, monitoring an API can be used to detect anomalies in user behaviour. If a user starts, for example, accessing certain API operations in a pattern that differs from the usual patterns, it may indicate a potential security issue. If the monitoring system detects such behaviour, it can send an alert to the IT security team [5].
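A naive sketch of such behaviour-based alerting is shown below: a user calling an operation outside their previously seen set triggers an alert. The alert function is a stand-in for a real notification channel, and a production system would use far richer baselines than this:

```python
from collections import defaultdict


class UsagePatternMonitor:
    """Flags API calls that deviate from a user's previously seen operations."""

    def __init__(self, alert=print):
        self.seen = defaultdict(set)   # user -> operations used before
        self.alert = alert             # placeholder for e-mail/pager integration

    def observe(self, user: str, operation: str):
        if self.seen[user] and operation not in self.seen[user]:
            self.alert(f"Possible anomaly: {user} called unfamiliar operation {operation}")
        self.seen[user].add(operation)
```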

There are several tools that focus on API monitoring, as well as tools that provide a whole system for API management. Tech marketplace G2 has a comprehensive list of available API management tools [6], and in their list of the 20 highest-rated API management solutions, Postman is at the top. There are also several lists of tools focused on API monitoring; see e.g. the lists by Comparitech [7] and Nordic APIs [8].

As cloud services are nowadays widely used, providers like Google Cloud, Amazon Web Services (AWS) and Azure offer many tools for web API monitoring and logging. Often some metrics are free and others available for an extra charge. The cloud service user can pick from the list of available API monitoring metrics those that are the most important for their application. For example, with Azure, the most frequently used metrics are capacity (based on gateway resources like CPU and memory consumption) and requests (the number of gateway requests) [9]. In addition to basic API monitoring, the cloud services also provide components for security monitoring and, for example, API access control. As cloud services operate on a pay-per-use principle, for example Google Cloud billing alerts can be used to enhance security by monitoring cloud usage and sending alerts if unexpected consumption is detected [10].

Nowadays, APIs are increasingly important for many different types of businesses, and the need for API management and monitoring is growing. With a good API monitoring system and security components, an API provider can monitor API performance and uptime, gain valuable information on the API usage patterns, and detect anomalous calls to the API. There are many tools and solutions available for API monitoring and management, and it seems that more challenging than API monitoring itself is deciding how comprehensive the monitoring should be and understanding the collected data, whether it is about API usage or the content of API calls.

References

[1] https://blogs.gartner.com/andrew-lerner/2014/07/16/the-cost-of-downtime/

[2] https://geekflare.com/api-monitoring-tools/

[3] https://smartbear.com/learn/performance-monitoring/guide-to-api-monitoring/

[4] https://docs.broadcom.com/docs/building-an-api-monitoring-practice

[5] Thielens, J. 2013, “Why APIs are central to a BYOD security strategy”, Network Security, vol. 2013, no. 8, pp. 5-6.

[6] https://www.g2.com/categories/api-management

[7] https://www.comparitech.com/net-admin/best-rest-api-monitoring-tools/

[8] https://nordicapis.com/10-api-monitoring-tools/

[9] https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor

[10] Google Cloud security foundations guide 2020, Google white papers, available at https://cloud.google.com/security/best-practices


Digital Platforms and Value Creation

Written by Prashanth Madhala (Tampere University)

What is a digital platform? A digital platform is a place where producers and consumers exchange information, goods, and services; it is also a community that brings these participants together to co-create value [1]. It allows multiple members to join and connect so that they can interact, exchange, and create value [2].

Creating a successful digital platform depends on key aspects such as the use of APIs for connectivity, which allow third-party members to complement the platform with their capabilities; facilitating exchange between users; trustworthiness and security; usability; and the ability to scale without degrading performance [1]. Success with a digital platform also depends more on the users who adopt the platform than on the technologies involved [3]. For example, traditional companies create value within well-defined boundaries, whereas digital platforms co-create value with an ecosystem of autonomous participants [4]. Key attributes of a digital platform are that it is a technology-enabled business model; it eases communication between several members such as users and producers; the value created is proportional to the size of the community; it enables trust with regard to data ownership and intellectual property rights; it promotes a compelling user experience; and it gives rise to innovative business models [2].

Digital platforms have benefits such as creating new products and services, promoting revenue, enhancing profitability and customer experience, improving operations [5], reducing costs, fostering collaboration and innovation, and moving products to the market faster [2]. Examples of digital platforms are social platforms, knowledge platforms, application stores, crowdsourcing platforms, media platforms, and infrastructure platforms [2][3].

There are key functions that allow digital platforms to create value. First, the platform must have an audience of consumers and producers whose interactions it facilitates; second, the platform should be able to match consumers and producers for the creation of value; third, the platform must enable external innovation through APIs in addition to management support; finally, governance is a very important aspect for governing interactions between all participants [6].

According to [7], in digital service platforms there are several sources of value created by platform participants; the authors provide a case study of Uber. The study presents an empirical framework containing three resource combinations: digital service platform sales, digital service platform safety, and digital service platform operation. The sources of value associated with service sales are transaction processing capabilities (tangible, economic value: value-in-use perspective), the review and rating system (tangible, intrinsic value: value-in-use perspective), and publicity and exploitation (intangible, economic value: value-in-use perspective). Second, the safety aspect of digital service platforms stems from technology reliability (mutual intrinsic value, e.g. between two participants) and safety (from transparency, intrinsic value). Third, in the operation of digital service platforms the sources of value come from membership (intrinsic value: value-in-exchange), work flexibility (value-in-exchange), and rewards and support (economic value: value-in-use). Use value is subjective and is defined by the perceived usefulness of the offering, whereas exchange value is recorded when there is a sale between a buyer and a seller.

In summary, the term digital platform has been defined and discussed: digital platforms bring together producers and consumers to co-create value and foster a community that can continue to co-create value; there are many benefits associated with digital platforms, such as profitability, innovation, and new business models; and for a digital platform to function, there are key attributes and requirements. The sources of value creation are also identified through different types of value, such as value-in-use and value-in-exchange.

References

[1] https://www.bmc.com/blogs/digital-platforms/

[2] http://stephane-castellani.com/everything-you-need-to-know-about-digital-platforms/

[3] https://enterprisersproject.com/article/2018/12/what-digital-platform

[4] Hein, A., Schreieck, M., Riasanow, T., Setzke, D. S., Wiesche, M., Böhm, M., & Krcmar, H. (2020). Digital platform ecosystems. Electronic Markets, 30(1), 87–98.

[5] https://xmpro.com/what-is-a-digital-business-platform-and-why-should-i-care/

[6] https://www.applicoinc.com/blog/how-do-platforms-create-value/

[7] Mansour, O., & Ghazawneh, A. (2017). Value creation in digital service platforms. In Proceedings of the 28th Australasian Conference on Information Systems, ACIS 2017.


Notes on REST and GraphQL

Written by Elina Kettunen (UH)

The RESTful web API has been the standard API design architecture, yet in recent years APIs based on the GraphQL Schema Definition Language have gained popularity. This post looks into the advantages and disadvantages of both REST- and GraphQL-based web APIs. The findings from scientific literature and blog posts are summarised in Tables 1 and 2.

GraphQL [1] was published by Facebook in 2015, and its key characteristics are support for a hierarchical data model and client-specific queries. The hierarchical data model can reduce the number of endpoints the API clients need to access, and the clients can ask only for the data they need [2]. With REST APIs, common problems have been over- or under-fetching of data; sometimes the client only wishes to show a small part of the available data but the API endpoint provides too much information in the JSON document, and sometimes the client has to access multiple endpoints to fetch all the desired data. Using GraphQL does not necessarily reduce the number of queries the API clients need to perform, but it has been shown that in some cases GraphQL can reduce the size of the returned JSON documents by over 90% (in number of fields and byte size) compared to REST-based APIs [2].
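To make the idea of client-specific queries concrete, the sketch below posts a GraphQL query that asks for exactly the two fields a UI needs, instead of receiving whatever a REST endpoint happens to return. The endpoint URL and the schema are hypothetical:

```python
import requests

# Ask only for the fields the UI actually shows - nothing more is returned.
query = """
query {
  city(name: "Helsinki") {
    name
    temperature
  }
}
"""

response = requests.post(
    "https://api.example.com/graphql",   # hypothetical single GraphQL endpoint
    json={"query": query},
    timeout=5,
)
print(response.json()["data"]["city"])
```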

In one study, implementing GraphQL API queries was found to be easier to learn than implementing queries to REST APIs, especially when the query contained multiple parameters [3]. However, implementing a GraphQL server requires certain components, such as a GraphQL execution unit, a schema, and resolvers on the API layer [4], which add complexity and might make using GraphQL overkill for simple applications [5]. Also, with REST APIs, using HTTP caching is easy, but since the idea of GraphQL is to provide a single endpoint for API queries, implementing caching requires extra effort from the developer [2, 5].
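As a minimal sketch of the server-side pieces (schema, resolver, execution), here is an example using the graphene Python library, which is not mentioned in the post; a production server would additionally expose an HTTP endpoint and handle caching and errors:

```python
import graphene


class Query(graphene.ObjectType):
    # Schema: one typed field the client can ask for, with an argument.
    hello = graphene.String(name=graphene.String(default_value="world"))

    # Resolver: the code that actually produces the data for the field.
    def resolve_hello(self, info, name):
        return f"Hello {name}"


schema = graphene.Schema(query=Query)
result = schema.execute('{ hello(name: "4APIs") }')   # execution unit
print(result.data)   # {'hello': 'Hello 4APIs'}
```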

GraphQL is strongly typed, which means the clients must adhere to data type definitions when making queries, adding to the security of GraphQL [2]. With REST APIs, handling different call types and returned data formats is relatively easy, but file uploading was initially not supported by GraphQL and support for file upload requires installing a library [6]. Since client applications are able to see all fields in GraphQL, information hiding is not well supported [2].

Large, complex nested queries are a potential performance and security problem for GraphQL servers, since processing large queries requires considerable effort and may lead to denial of service if the server cannot cope with the requests. Limiting the query complexity and depth is therefore necessary [7]. 
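A naive illustration of depth limiting is shown below: it counts the nesting level of curly braces in an incoming query string and rejects queries that are too deep. Real servers typically work on the parsed query AST instead, and the limit of 7 is an arbitrary example:

```python
def max_depth(query: str) -> int:
    """Return the deepest level of curly-brace nesting in a GraphQL query string."""
    depth = deepest = 0
    for ch in query:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth -= 1
    return deepest


def reject_if_too_deep(query: str, limit: int = 7) -> None:
    if max_depth(query) > limit:
        raise ValueError("Query exceeds maximum allowed depth")
```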

As a still emerging technology, GraphQL does not have as much tool support as REST APIs. The lack of monitoring tools has been mentioned in one blog post [8]. GraphQL has also lacked means for integrating multiple interfaces that are important in distributed systems. A solution to this problem is to use GraphQL Federations, which are common interfaces used on top of existing GraphQL web service interfaces. Besides an existing solution (Apollo Federation [9]) researchers have also proposed a novel framework to address the multiple interface integration problem [10]. 

REST APIs are generally considered simple and inexpensive with good interoperability, flexibility, scalability and robustness. GraphQL provides great flexibility on the frontend, as changes in data usage usually need not affect the server. GraphQL and REST do not need to be alternatives to each other, as they can be used in parallel. GraphQL is well suited, for example, for API gateways [4]. There are several GraphQL engines that support major programming languages, and different query planning tools can translate GraphQL queries into other query languages [11]. Also, deprecating fields is easy and adding new fields to a type does not lead to breaking changes in GraphQL APIs, which makes versioning easier [2]. With GraphQL, API developers can avoid over- and under-fetching issues, and gain performance benefits by reducing the overhead from transferring large JSON files.

Table 1. Advantages and disadvantages of REST API design architecture.

Table 2. Advantages and disadvantages of GraphQL design API architecture.

References

[1] https://graphql.org/

[2] Brito, G., Mombach, T. & Valente, M.T. 2019, “Migrating to GraphQL: A Practical Assessment”, SANER 2019 – Proceedings of the 2019 IEEE 26th International Conference on Software Analysis, Evolution, and Reengineering, pp. 140-150. 

[3] Brito, G. & Valente, M.T. 2020, “REST vs GraphQL: A controlled experiment”, Proceedings – IEEE 17th International Conference on Software Architecture, ICSA 2020, pp. 81-91. 

[4] Vogel, M., Weber, S. & Zirpins, C. 2018, Experiences on Migrating RESTful Web Services to GraphQL. In: International Conference on Service-Oriented Computing. Springer, Cham, 2017. p. 283-295.

[5] https://medium.com/@back4apps/graphql-vs-rest-62a3d6c2021d

[6] https://levelup.gitconnected.com/how-to-add-file-upload-to-your-graphql-api-34d51e341f38

[7] Wittern, E., Cha, A., Davis, J.C., Baudart, G. & Mandel, L. 2019, An Empirical Study of GraphQL Schemas. In: International Conference on Service-Oriented Computing. Springer, Cham, 2019. p. 3-19.

[8] https://www.moesif.com/blog/technical/graphql/REST-vs-GraphQL-APIs-the-good-the-bad-the-ugly/

[9] https://www.apollographql.com/docs/federation/


APIs have become a crucial asset for many businesses

Written by Saeid Heshmatisafa (Tampere University)

For decades, many companies have developed APIs as a means for their partners to share information and facilitate integration. In the past, integrating software products with business partners might have taken 12 to 18 months, while APIs not only accelerate the process but also allow many business partners to collaborate and interact within an ecosystem. Web APIs form modern middleware that provides access to any type of content, data, and other digital assets to build creative desktop, mobile, and web applications.

In today’s digital economy, the value of your assets will remain limited if they are isolated within an individual business ecosystem. The outside-in practice of open innovation has led many companies to look outside their organizational boundaries for the next novel ideas in order to expand their products and services and maximize the value of their technologies. Open APIs are one of many ways that enable companies to gain positive network effects along with spotting emerging trends, developing new products and services, and creating a digital economy. It can be argued that APIs are the components that enable various apps, systems, and platforms to connect and share data with partners and third parties to develop new API-consuming solutions. Such a strategy can be seen in the case of Visma’s e-sign: the company had gained 180 customers with its traditional UI product, but as it began to embrace risk and move from using APIs primarily for supporting integrations towards accepting third-party users, the number of customers increased to 180,000 (a thousandfold increase). Currently, the digital signature service has approximately 36,000 users.

ProgrammableWeb.com, one of the most popular API directories, has more than 23,539 public APIs in its database. Currently, the average application integrates 18 APIs, and 50% of B2B collaborations are powered through APIs. Most significantly, according to Akamai’s Tony Lauro, APIs are responsible for 83% of all web traffic. Google Maps, one of the earliest open APIs, can be found in many applications that include a geospatial aspect. For instance, Uber is a mobile application connecting taxi drivers (providers) to customers/consumers. However, behind the scenes, Uber is an ecosystem of providers: Google Maps is used as the front-facing map interface, Stripe handles the payments, Twilio provides communications, and many more services aid in building Uber’s value chain of APIs.

It is evident that API technology has surpassed its original purpose as a “technical asset” and become a major force in the economy. A significant revenue stream of many incumbent companies is generated by their APIs, such as eBay (60 percent), Salesforce (50 percent), Expedia (90 percent), and Amazon Web Services (a full 100 percent). APIs not only open a new monetization stream from “shelved intellectual properties,” they also incentivize the emergence of API-first companies and the building of lucrative platform empires. For instance, Salesforce acquired MuleSoft – an API management provider – for $6.5 billion. Another example is the acquisition of Plaid – a startup specialized in developing APIs that connect payment apps such as Venmo to banking and other financial information – by Visa for $5.3 billion.

APIs are not just another high-tech product; they are the next generation of the internet. Enterprises that take technological risks and embrace a digital strategy are more likely to become the next digital transformation leaders. However, API strategies vary depending on the type of role a firm aims to play. In this regard, companies can experiment by exposing non-core assets through open APIs. Nonetheless, firms need to take a few critical capabilities into account when designing an open API. First, the value exposed by the API must be unique and useful. Second, the API must be planned to address a well-defined need; exposing existing capabilities may require redesign and implementation from the perspective of potential users. Third, it must be simple and flexible enough to meet user needs and styles of consumption. Fourth, it must ensure consistent accessibility and operation. Fifth, it must be supported and extended throughout its lifetime. API providers need to invest a great deal of effort in creating the ecosystem, supporting existing users, attracting new users, and evangelizing the API. Thus, APIs have become a crucial asset for many businesses.



Notes on blockchains

Written by Elina Kettunen (UH)

The best-known applications of blockchain technology are cryptocurrencies, but there is considerable interest in applying blockchains as a data storage method in many different fields. Blockchains can be used to record transactions in a reliable, secure and immutable manner. Transactions are saved to linked blocks that form a digital, encrypted ledger. Each party, or node, in a blockchain maintains a copy of this immutable ledger. Consensus among the parties is achieved by using, for example, proof-of-work.
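To make the “linked blocks” idea concrete, here is a toy sketch of a hash-linked chain with a trivially small proof-of-work; real systems differ greatly in scale and detail:

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def mine_block(transactions, previous_hash, difficulty=3):
    """Find a nonce so the block hash starts with `difficulty` zeros (toy proof-of-work)."""
    nonce = 0
    while True:
        block = {"transactions": transactions, "prev": previous_hash, "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1


# Each block commits to the previous block's hash, so changing any earlier
# transaction changes every later hash - that is what makes the ledger tamper-evident.
genesis, genesis_hash = mine_block(["alice pays bob 5"], previous_hash="0" * 64)
block2, _ = mine_block(["bob pays carol 2"], previous_hash=genesis_hash)
```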

Blockchains can be permissionless or permissioned. Permissionless, public blockchain systems allow anyone to join the blockchain, whereas permissioned, private blockchain systems use membership control to allow only identified parties to join. In public blockchains, anyone can read or write data, but while reading is free, writing to a blockchain requires paying a fee in cryptocurrency. The fee will be paid to a miner who first completes the proof-of-work to secure a new block containing the transaction data. In private blockchains, the owner of the blockchain can decide on the transaction fees. 

Blockchains can be used to eliminate the need for trust among the parties sending transactions to each other. All transactions are visible in the distributed ledger, and tampering with the transaction history would require a malicious party to take control of the majority (51%) of the blockchain network’s mining hash rate.

Ethereum [1] is the most popular permissionless blockchain that allows writing of smart contracts on the blockchain. Smart contracts consist of contracts or business logic that are installed on the blockchain system. Parties in the blockchain can execute smart contracts to create different transactions that are then validated by other parties and saved to the blockchain.

There are several different platforms for building permissioned, private blockchains. Some of the most widely used are Hyperledger [2] and R3’s Corda [3]. Private blockchains are meant to allow saving sensitive data to a blockchain so that only selected parties are able to view it. However, it is possible to save encrypted, private data also to a public blockchain.  

Since public blockchains use computationally more expensive consensus protocols and have more nodes, private blockchains can potentially offer better scalability and faster transactions. However, private blockchains are not truly decentralized and, for example, Hyperledger Fabric was found at least in 2019 to have issues with network delays causing desynchronization in the blockchain [4]. 

There has been a lot of interest in the possibilities of blockchain technology, and hopes of revolutions in many different areas such as finance and Internet of Things (IoT). Blockchains can provide secure ways to manage confidential data and identity information, and thus provide potential use cases also in health care. 

However, so far there are only a few fully operational blockchains besides systems related to cryptocurrencies and Bitcoin [5] remains the most successful real-world application of blockchain technology. 

Ethereum was the first blockchain to support the implementation of smart contracts, which enable building decentralized applications (dapps) on the Ethereum blockchain. There are various potential use cases for dapps, and plenty of tutorials on dapp development are available online. Despite this, most dapps have practically no users or transactions. On 9 June 2020, the list of Ethereum dapps on DappRadar [6] showed over 1,880 dapps, but only 330 had had at least one user during the previous seven days. All top-ten dapps (based on user count during the previous seven days), save one, were related to money exchange, high-risk investments and decentralized finance. There are some gaming applications on Ethereum (e.g. CryptoKitties, My Crypto Heroes), but most dapps appear to be related to finances and gambling.

In Deloitte’s Global Blockchain survey 2019, 53% of organizations saw blockchains as critical and being in top five of their priorities [7]. In the same survey one of the top five “organizational barriers to greater investment in blockchain technology” was the lack of in-house capabilities. As the need for blockchain professionals is likely to grow in the future, in Finland a project has been launched to provide education on blockchain technology in universities [8]. 

Supply chains are one area where the use of blockchain technology can potentially streamline the process and reduce paperwork besides creating a transparent, immutable record of the product history. However, Gartner’s report (2019) estimates that “contrary to initial market hype and for the time being, blockchain is not enabling a major digital business revolution, and may not enable one until at least 2028” [9]. This is due to several factors that currently make it challenging for organisations to adopt blockchain technology. 

In a blockchain system, there is overhead from replicating the data. For example, in Ethereum, users that host a full node need approximately 180 GB of disk space [10]. However, not all users need to download the full node, as there are also light nodes that store only block headers and are able to request other information from full nodes. Light nodes can be used, for example, in mobile phones or embedded devices.

For instance, Peker et al. (2020) studied the cost of saving IoT sensor data to the Ethereum blockchain. In their experiment, the cost of storing 6000 data points (256-bit integers) was approximately 335–467 US dollars, depending on the method used [11]. According to other informal estimates, the cost of storing 1 kB of data on the Ethereum blockchain would have been approximately 1.6 US dollars in 2018, and storing 1 GB over 1.6 million US dollars. According to Kumar et al. (2020), “the cost of storage on a public blockchain platform can be staggering, a few thousand times higher than on a distributed database system or in the cloud. On a permissioned blockchain system, the cost is likely to be less but still one or two orders of magnitude higher.” [12]
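A back-of-the-envelope check of these figures (assuming 32 bytes per 256-bit integer) shows the two estimates are in the same ballpark:

```python
# Peker et al.: 6000 data points of 256 bits each
data_kb = 6000 * 32 / 1024            # ~187.5 kB
low, high = 335, 467                  # reported total cost in USD
print(low / data_kb, high / data_kb)  # roughly 1.8 - 2.5 USD per kB

# Informal 2018 estimate: ~1.6 USD per kB
print(1.6 * 1024 * 1024)              # ~1.68 million USD per GB
```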

Due to mining being computationally expensive, public blockchain systems consume more energy than regular distributed databases. Bitcoin mining is notoriously energy consuming, and sustainability issues are one area where more research is needed. The popularity of Bitcoin and other cryptocurrencies has also led to different scams related to cryptocurrencies, and for example, infected websites harnessing visitors’ computers to mine Bitcoin.

In their article Kumar et al. (2020) suggest that “blockchain technology should be deployed selectively, mainly for interorganizational transactions among untrusted parties, and in applications that need high levels of provenance and visibility.” For example, tracking the origin and shipping of precious gemstones or other expensive or critical commodities is one area where blockchain systems have been tried. Regarding supply chains, major challenges for using blockchains include creating legislation and standards all parties can agree on, and getting everyone to use blockchain technology despite additional costs. Joining a consortium is usually necessary to properly utilize blockchains.

To conclude, as blockchains are today a high-cost and high-overhead storage method, careful consideration is needed to determine the proper use cases. A decision should also be made on what data to store on the blockchain, as it might be feasible to store only the most critical parts of the data to save resources. Blockchain technology is being piloted in various fields, and in the future, blockchains are likely to be utilized on a much wider scale.

References and recommended reading

[1] https://ethereum.org/en/

[2] https://www.hyperledger.org/

[3] https://www.r3.com/corda-platform/

[4] Nguyen, T.S.L., Jourjon, G., Potop-Butucaru, M. & Thai, K.L. 2019, “Impact of network delays on Hyperledger Fabric”, INFOCOM 2019 – IEEE Conference on Computer Communications Workshops, INFOCOM WORKSHOPS 2019, pp. 222-227.

[5] https://bitcoin.org/en/

[6] https://dappradar.com/rankings/protocol/eth

[7] Deloitte’s Global Blockchain survey 2019

[8] https://www.eura2014.fi/rrtiepa/projekti.php?projektikoodi=S22027

[9] Gartner 2019: Blockchain Unraveled: Determining Its Suitability for Your Organization https://www.gartner.com/en/doc/3913807-blockchain-unraveled-determining-its-suitability-for-your-organization

[10] https://medium.com/@marcandrdumas/are-ethereum-full-nodes-really-full-an-experiment-b77acd086ca7

[11] Peker, Y.K., Rodriguez, X., Ericsson, J., Lee, S.J. & Perez, A.J. 2020, “A cost analysis of internet of things sensor data storage on blockchain via smart contracts”, Electronics (Switzerland), vol. 9, no. 2.

[12] Kumar, A., Liu, R., Shan, Z. 2020, “Is Blockchain a Silver Bullet for Supply Chain Management? Technical Challenges and Research Opportunities”, Decision Sciences 51 (1), pp. 8-37.


Customer value in software solution planning – research findings

The title of the doctoral dissertation is “Solution Planning from the Perspective of Customer Value”.

Marko Komssi’s recent doctoral dissertation analysed problems in software solution planning from the perspective of customer value. A key finding was that solution planning in software companies suffers from narrow, feature-centric thinking and a “firefighting syndrome”. These customer-value-related problems appear to be cultural and, as such, difficult to fix.

Komssi’s research shows that holistic and early analysis of the customer’s tasks increases customer value in software solution planning. The dissertation presents concrete practices for describing and prioritising customers’ tasks in roadmap-based solution planning. Because customers’ tasks typically do not change very often, they enable a longer-term view of solution planning than product features do. The analysis would have a stronger positive impact on solution planning if companies’ strategic processes and culture emphasised customer value.

Opponent: Professor Pekka Abrahamsson, University of Jyväskylä

Custos: Professor Marjo Kauppinen, Aalto University School of Science, Department of Computer Science

Contact details: Marko Komssi, F-Secure, tel. 040 5323753, marko.komssi@f-secure.com

Electronic dissertation: https://aaltodoc.aalto.fi/handle/123456789/45395
The public defence will be held remotely via Teams:
Link to the Teams meeting


Utilisation of 3D BIM and APIs for moisture risk management during construction

Indoor air problems are unfortunately common even in new buildings. These problems may be a result of poor structural designs or misuse, but more commonly they occur due to mistakes during construction.

A wall panel and a couple of hollow slabs waiting for assembly on site.

Structural engineers are familiar with the problematic designs, and with active review processes and the utilisation of BIM (building information model) these can be handled. But what about the mistakes during construction?

New modular methods are reducing the problems with construction, as more and more construction phases are carried out indoors, protected from environmental elements. However, not all buildings can be built from modules, and even with modules some of the work needs to be done on site, for example the final assembly. Transporting the modules and panels from the factory to the site is always a risk. Delays in the assembly are another significant risk, since they mean that the panels are stored temporarily on site to wait for their assembly. Consequently, the panels may be exposed to harsh and difficult conditions for extended periods. Modules are typically protected at the factory with a plastic film. However, there is always a risk that the protective plastic film is damaged during transportation or removed too early, as happened in the Wood City case (https://www.rakennuslehti.fi/2017/11/diplomityo-kertoo-miksi-wood-cityn-kosteudenhallinta-petti/).


Unprotected wall panel waiting for assembly. https://www.rakennuslehti.fi/wp-content/uploads/2017/11/woodcity-1280×720.jpg

Incompletely protected panels, and panels and modules with damaged protection, are at risk of getting damp before assembly, which can later lead to indoor air problems.

How new technologies may help in moisture risk management

Utilising IoT sensors and 3D BIM models makes it possible to measure temperatures and relative humidity in real time. Could the construction-related problems be avoided, or at least noticed early enough to make it possible to fix them without excessive dismantling of the building, if sensors were used? Adding humidity and temperature sensors to modules and panels at the factory and monitoring the values in real time during construction would give a warning in case a structure became damp. This way a drying process could be started before actual microbial damage formed. Adding the sensor data to the 3D BIM model via an API (application programming interface) makes monitoring easy and possible without visiting the site on a daily basis.
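As a sketch of how such monitoring could work, the small polling loop below reads humidity values from a hypothetical sensor API and flags panels that exceed a threshold. The endpoint, the payload format, and the 85 % RH limit are assumptions for illustration only:

```python
import time

import requests

SENSOR_API = "https://example.com/api/sensors"   # hypothetical sensor data endpoint
HUMIDITY_LIMIT = 85.0                            # illustrative alert threshold (% RH)


def check_panels():
    """Fetch the latest readings and report panels that may be getting damp."""
    readings = requests.get(SENSOR_API, timeout=10).json()
    alerts = []
    for reading in readings:   # assumed format: {"panel_id": ..., "humidity": ..., "temperature": ...}
        if reading["humidity"] > HUMIDITY_LIMIT:
            alerts.append(reading["panel_id"])
    return alerts


if __name__ == "__main__":
    while True:
        for panel in check_panels():
            print(f"Warning: panel {panel} humidity above {HUMIDITY_LIMIT} % RH - check protection")
        time.sleep(600)   # poll every 10 minutes; minute-level delays are fine at this time scale
```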

Humidity values in the 3D BIM model make it noticeable that one panel might be damp.

In the case of concrete structures, it is crucial that the relative humidity of a concrete slab is low enough before progressing to the finalising phase. In addition to the humidity sensors, temperature sensors would be beneficial in monitoring the conditions and optimising the temperature and ventilation for the drying of the concrete. This method might not replace the current measurements based on test pieces, but at least the sensor measurements would make it possible to react to unwanted changes in the drying conditions and to prevent wasting several weeks of drying time due to poor drying conditions. Consequently, an optimal drying process, better quality, and fewer delays in construction would be achieved.


Could the floor be surfaced earlier if the drying conditions were measured in real time?

In the 4APIs project demo we have integrated a 3D BIM model and IoT sensor data via APIs. Our case building was the Ypsilon Community Centre / School in Yli-Maaria, Turku, Finland. In this demo case, the sensors were installed in a finished building and measured temperatures, relative humidity, and incoming and outgoing air flow in real time.

Interestingly, the demo case raised questions and some discussion about whether real-time is actually real-time. As Teemu Mikkonen wrote in his blog post “Real-time data with cloud platform”, there were noticeable delays in data travelling from the sensors to the actual services. However, in the case of moisture risk management during construction, a delay of several minutes does not cause problems, as the time scale, for example for concrete slab drying, is weeks. Even with delays the benefits are remarkable, since unsuitable changes in drying conditions or moisture in structures can be noticed almost in real time. This gives constructors and supervising authorities the possibility to react to humidity-related problems earlier.

In new buildings, sensors may be added in the early phases of construction. In such cases, some of the construction-time sensors would also be usable later for monitoring the building during its use. As a result, a complete measurement history of the building could be collected and utilised in indoor air quality investigations.