Democratic Design Thinking

What I learned the most from Sweden was perfection: the concept of perfection in designing and building products. In this article, I will share what I learned from IKEA and how we can bring these concepts to the software world to design products that not only add real value but are also cost effective. Leading by Design inspired me to develop a new way of quantifying the cost/need ratio of any product we are building, and of constructing the feedback loop early in the design process, before prototyping.

I for Ingvar, K for Kamprad, E for Elmtaryd, A for Agunnaryd – IKEA

Leading by Design by Bertil Torekull is one of the best reads I had in 2022.

The philosophy started from a trip to Milan, where Ingvar saw the difference between the exhibited luxury and the furnishings that ordinary people could afford in their homes. He asked himself:

  • Why do poor people have to put up with such ugly things?
  • Was it necessary that what was beautiful could be bought only by the elite for large sums?

He went back home with those questions ringing in his head; as he put it, "They were to go on demanding an answer from me all my life." Democracy as an instrument for evolution can lead to remarkable inertia. Some of the themes that stayed with me from the book:

  • Solving the question of how to convey goods from the factory to the customers in the simplest and cheapest way
  • Giving names to furniture instead of serial numbers
  • Mail order and furniture store in one
  • Only those who are asleep make no mistakes
  • An even greater joy is having business ideas and convincing others that they are possible to achieve
  • A kind of keenness not to miss a chance
  • Companies are People, Ideas, Cultures and History
  • Cost awareness; the majority of people; the dream of a good capitalist; hard work raised to the highest morality.
  • Taking responsibility is a privilege: The fear of making mistakes is the root of bureaucracy and the enemy of development. 

Democratic Design

  • Swedish Design: Bright, Light and Functional
  • All designs learn from other designs
  • The innovation of self assembly, saving a great deal of money in the factories and in the transport
  • Gillis: "What a lot of space it takes up; let's take the legs off and put them under the tabletop." Then one fine day, they had the first flat parcel, and thus they started a revolution. Max was the name of that first self-assembly table.
  • Reality forced the innovation upon us: we had begun to experience a worrisomely high percentage of furniture damaged in transport (broken table legs, etc.)
  • Then came another revolution – the way of treating the surface of the wood

The concept

We are a concept company, Ingvar said:

The spirit stands primarily for the concept.

  • Just inside the entrance, there should be the living room, and in the living room, they begin by deciding on the most important piece of furniture of all, the sofa 
  • After the sofa, they usually go for the carpet, the table, then they buy the chair, then a bookcase or shelving
  • After that, the kitchen, bedroom and so on

Product Names

Computer people wanted numbers; Ingvar fought for names which he usually thought up with the designer.

  • Suites, Sofas and Chairs were to have city names
  • Bookcases were to have boys' names
  • Curtains, girls' names
  • Duvets, bridge names
  • An armchair was called Stabil, and it certainly was stable

Products like Aveny, Billy, Morot, Moppe and Sultan were hits.

The same Swedish names are used all over the world.

The Business model

  • The hugging management – hug each other; if you like each other, you work well together. Hugging is both free and cost effective.
  • The ideological think tank of the empire: each store, concept purchaser and franchisee pays 3% of turnover to Inter IKEA Systems B.V.
  • The sacred concept: waste is a deadly sin; simplicity is a virtue; do it a different way; concentration is important to our success – we can't do everything, everywhere, all at the same time.
  • Selling hotdogs for 5 SK.
  • Down to earth democracy
  • To manage all of it, you have to know the details – that's my philosophy
  • Tax is a cost; corporate tax is 28%
  • 1% in reduced sales produces 10% in reduced profit.
  • The Head-To-Foot company

IKEA has meant more for the process of democratization than many political measures put together. Part of creating a better everyday life for the many consists of breaking free from status and convention – becoming freer as human beings. We must, however, always bear in mind that freedom implies responsibility, meaning that we must demand much of ourselves. No method is more effective than the good example.


Crypto Algorithms – Hashing, Encryption and Digital Signatures

In this article I will talk about crypto algorithms (symmetric and asymmetric) that are the basis of many applications such as blockchain, cryptocurrencies and mobile banking.

Starting with symmetric cryptography, where a common key is shared between the sender and the receiver: the main issue with symmetric crypto is that keys have to be exchanged over an insecure channel. Under symmetric crypto we have stream ciphers, which encrypt bits individually using modular arithmetic over finite sets with cryptographically secure pseudorandom number generators (CSPRNGs) and linear feedback shift registers (LFSRs). We also have block ciphers, which encrypt blocks of bits through a number of permutation and substitution steps with S- and E-boxes, for example in a Feistel network. DES is an example of that, where the encryption key is 56 bits long. With a brute-force attack, DES became easily breakable after the rise of FPGAs that can perform the 2^56 computations. 3DES was one alternative, where encryption is done with three different DES keys. The other alternative was AES, where the key length is 128/192/256 bits and which leverages Galois fields. By far, AES is the most used symmetric cipher today, used in most web browsers (HTTPS) and banking systems (credit card information).
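
To make the AES part concrete, here is a minimal authenticated-encryption sketch; it assumes the third-party Python `cryptography` package, and the key, nonce and sample message are illustrative placeholders.

```python
# Minimal AES-256-GCM sketch (assumes the third-party `cryptography` package).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)        # shared symmetric key (256 bits)
nonce = os.urandom(12)                           # never reuse a nonce under the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"4111 1111 1111 1111", b"card-number")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"card-number")
assert plaintext == b"4111 1111 1111 1111"
```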

Key establishment is critical for system security. In DH key exchange, for instance, both parties agree on the session key KAB = α^(ab) mod p, where a is the secret key of the 1st party and b is the secret key of the 2nd party. This can be manipulated through a man-in-the-middle attack, where an adversary generates her own private keys and sends the corresponding public keys to both parties. This attack works against all public-key schemes, and the solution is a key distribution centre or certificate authority (CA); in this case the public keys of both parties are signed by the private key of the CA.
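
A toy Diffie-Hellman run with deliberately tiny, insecure parameters makes the KAB = α^(ab) mod p computation concrete; the numbers below are for illustration only.

```python
# Toy Diffie-Hellman key exchange (tiny, insecure parameters for illustration only).
p, alpha = 23, 5            # public prime modulus and generator
a, b = 6, 15                # private keys of the 1st and 2nd party

A = pow(alpha, a, p)        # 1st party publishes alpha^a mod p
B = pow(alpha, b, p)        # 2nd party publishes alpha^b mod p

k_ab_1 = pow(B, a, p)       # 1st party computes (alpha^b)^a mod p
k_ab_2 = pow(A, b, p)       # 2nd party computes (alpha^a)^b mod p
assert k_ab_1 == k_ab_2     # both sides hold K_AB = alpha^(a*b) mod p
print(k_ab_1)               # 2 for these toy values
```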

Unlike symmetric algorithms, asymmetric or public-key cryptography requires the computation of two keys – a public key that can be published openly and a private key; encryption happens with the public key and decryption with the private key. There are two main families of public-key algorithms: the integer factorization family, which includes the RSA algorithm and can be used for key exchange, encryption or digital signatures, and the discrete logarithm family, which includes Diffie-Hellman (used mainly for key exchange), ElGamal (used mainly for encryption) and elliptic curves (used for key exchange, ECDH, or for digital signatures, ECDSA). RSA encryption (1024-2048 bit keys), which uses the extended Euclidean algorithm (gcd) and Euler's ϕ(n) to choose a public key and compute the private key, has proved to be secure (2^1024 computations is not an easy task). Diffie-Hellman key exchange (1024-2048 bits) is another application of asymmetric crypto; it works in the cyclic group Zp* of integers and requires an attacker to solve the discrete logarithm problem in a roughly 1024-bit group to be broken. Elliptic-curve digital signatures (around 160 bits) are a third application; they work in a cyclic group of (x, y) points on the curve and require an attacker to solve the generalized discrete logarithm problem, which is extremely hard. This is one of the main digital signatures in use today, for example in e-commerce, where you sign a transaction with your private key Kpr and the e-commerce site verifies the transaction using your public key Kpub.
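
As a concrete illustration of the sign-with-Kpr / verify-with-Kpub flow, here is a short ECDSA sketch; it assumes the third-party `cryptography` package, uses the NIST P-256 curve, and the transaction string is a made-up example.

```python
# Minimal ECDSA sign/verify sketch (assumes the third-party `cryptography` package).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # K_pr stays with the signer
public_key = private_key.public_key()                    # K_pub is published

transaction = b"pay 100 EUR to merchant 42"
signature = private_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

# The verifier (e.g. the e-commerce site) checks the signature with K_pub;
# verify() raises InvalidSignature if the transaction or signature was altered.
public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))
```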

One more restriction for digital signatures is the limited message length (on the order of 256 bytes for the schemes above), so for a long message or a large file we need a hash function to compress the input first. Hash functions are auxiliary functions in cryptography used for digital signatures, MACs, key derivation, etc. The main requirement for a hash function (e.g. SHA-256) is collision resistance. In the case of ECDSA the output is 160 bits, so an attacker needs about 2^((n+1)/2) * sqrt(ln(1/(1-λ))) hash values, where λ is the desired likelihood of a collision; doing the math, you need roughly 2^81 values to find a collision.
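
That collision estimate can be checked with a few lines of arithmetic; the function below simply evaluates the formula for a given output size n and collision likelihood λ.

```python
# Birthday-bound estimate: hash values needed to find a collision with
# probability lam for an n-bit output, t ≈ 2^((n+1)/2) * sqrt(ln(1/(1-lam))).
import math

def collision_attempts(n_bits: int, lam: float) -> float:
    return 2 ** ((n_bits + 1) / 2) * math.sqrt(math.log(1 / (1 - lam)))

# For a 160-bit output and a 50% collision likelihood this is about 2^80,
# i.e. on the order of the 2^81 figure quoted above.
print(math.log2(collision_attempts(160, 0.5)))
```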

Reference: www.crypto-textbook.com


Software Defined Network – What does it really mean?

In this article, I will present the software-defined network architecture and deep dive into the concepts of virtual networks and the network hypervisor. Before we go there, let's review some history – the sequence of events that drove the evolution of network protocols over the years.

In the early days, network broadcast domains became very big and ARP flooding became unmanageable, so we broke the network into multiple broadcast domains, or VLANs. Then we needed a VLAN to span multiple switches, so we encoded the VLAN number in the L2 Ethernet frame via dot1q trunk encapsulation. Then we needed redundant links in the network while preventing loops, and that is what Spanning Tree Protocol solved at L2.

Classful IP addressing (N.H.H.H) added another problem: a small number of networks, each supporting a huge number of hosts. The solution was subnetting, so I can take a 10.0.0.0/8 network and split it into two networks (10.0.0.0/9 and 10.128.0.0/9). With classful routing protocols such as RIPv1, routers don't advertise subnet masks, and that is what classless routing protocols such as RIPv2, EIGRP and OSPF provided. Then we wanted to create tunnels between two autonomous systems over an MPLS circuit, and here label routing solved a big problem in the forwarding table by injecting labels between L2 and L3.
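
The 10.0.0.0/8 split above can be reproduced with the standard-library ipaddress module; this only illustrates the subnetting step, not any routing protocol.

```python
# Split 10.0.0.0/8 into two /9 subnets using the standard library.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/8")
for subnet in net.subnets(new_prefix=9):
    print(subnet, "->", subnet.num_addresses, "addresses")
# 10.0.0.0/9 -> 8388608 addresses
# 10.128.0.0/9 -> 8388608 addresses
```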

MPLS by itself is plain transport; the most important parts are the services on top, such as MPLS VPN, traffic engineering and QoS. With MPLS VPN, the PE router has a VRF per customer, which offers multi-tenancy – multiple customers on the same MPLS network. The router in this case has a global routing table as well as VRF tables (one per customer), and each VRF has its own CEF table. MPLS adds overhead to the packet (each label is 32 bits), and it became a main requirement to support jumbo frames with an MTU above 1500 bytes. Since the PE router connects multiple customers, each with a different VRF table, we ran into a BGP routing problem (for example, if one customer uses the same IP range as another customer); here the Route Distinguisher (RD) solved the problem and added 64 bits to the routes. And finally, with the shortage of IPv4 addresses, we needed to adopt IPv6 in service provider networks, so we added extensions to the TCP/IP software stack.
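
To make the "32 bits per label" point concrete, the sketch below packs the standard MPLS shim header fields (20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL) into one 32-bit word; the field values are arbitrary examples.

```python
# Pack one 32-bit MPLS shim header: 20-bit label, 3-bit TC/EXP, 1-bit S, 8-bit TTL.
def mpls_header(label: int, tc: int, bottom_of_stack: bool, ttl: int) -> int:
    return (label << 12) | (tc << 9) | (int(bottom_of_stack) << 8) | ttl

word = mpls_header(label=100, tc=5, bottom_of_stack=True, ttl=64)
print(f"{word:032b}")   # the 32 bits pushed between the L2 and L3 headers
```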

So what is the problem with networking today? Why are we moving to software-defined networking?

First, we need to get out of the closed box in networking and create an operating system for the network, with resource management and scheduling, clean abstraction APIs, user groups, permissions and multiple administrative domains. Such an OS has processes built on Linux namespaces, and the main problem with the Linux container architecture is application packaging, which still requires work to be ready for this network evolution.

Second, many network functions (SSL/TLS termination, load balancing, etc.) are composed of multiple processes, so we need a network hypervisor that not only supports that but is also aware of the multiple slices in the network and controls what information can move from one slice to another.

Third, with the hybrid cloud model, confidentiality and integrity become more challenging. You might have two apps, for example one written in Java and another in Erlang, sharing the same network. For confidentiality, we need to ensure that one app does not leak packets to the other; for integrity, we need to make sure that one application cannot generate packets that change the network's behavior.

Fourth, we need to solve the traffic engineering problem. The protocols used today work just fine; in the WAN, for example, BGP works perfectly well. The problem arises when you start doing traffic engineering with MPLS, and here SDN can solve the problem by applying logic that affects forwarding: the SDN controller does the TE and tells the switches what the TE FIB should be, as in the sketch below.
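
As a conceptual sketch of "the controller does the TE and tells the switches what the FIB should be", the code below computes per-switch forwarding entries for a controller-chosen path; the switch names, prefix and install function are hypothetical, not a real controller API.

```python
# Conceptual SDN traffic-engineering sketch: the controller picks a path and
# derives each switch's forwarding entry (FIB). Names and topology are invented.
def install_te_path(path, dst_prefix):
    """Return per-switch FIB entries for a controller-chosen path."""
    fib = {}
    for hop, next_hop in zip(path, path[1:]):
        fib[hop] = {dst_prefix: next_hop}   # forward traffic for dst via next_hop
    return fib

# Controller decision: steer 10.1.0.0/16 over the longer, less-utilised path.
te_path = ["sw1", "sw4", "sw3", "sw2"]
for switch, entries in install_te_path(te_path, "10.1.0.0/16").items():
    print(switch, entries)                  # in practice pushed via a southbound API
```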

Fifth, with the proliferation of cloud applications, we will have islands of SDN, where each island has different performance and security requirements. In this case we will have different SDN controllers and accordingly will need a controller of controllers, which is where SDN federation comes into the picture.


The data processing Evolution from Hadoop & Spark to Apache Storm

Unstructured and distributed data sets are becoming the norm in the new data-centric world. Petabytes of data are processed every single day on the world wide web. First, we need to convert this unstructured data, with its billions of transactions, into knowledge. Second, we need a fast data processing model; Hadoop/MapReduce has been the main platform for the last couple of years, but there was a big demand for faster processing, and that is what Apache Spark brought on top of Hadoop. Third, we need to analyze big data in real time, and that is what Apache Storm brought to the architecture.

MapReduce came out of the functional programming way of thinking. In the Hadoop MapReduce framework, datasets are divided into pieces called chunks; you apply a map function to the chunks to create intermediate key-value pairs, and the framework then groups the intermediate values by key and passes them to reduce function invocations that create the output values. Hadoop has the concept of the JobTracker, a master node that coordinates everything in the Hadoop cluster. When a client submits a job, the JobTracker breaks it into chunks and assigns work to TaskTrackers, which apply the map() and reduce() functions and store the output data on HDFS. But the Hadoop JobTracker was a barrier to scaling, and that is what YARN (Yet Another Resource Negotiator) addressed on top of Hadoop. The YARN ResourceManager replaced the resource management service of the JobTracker; in YARN, the ApplicationMaster determines the number of map and reduce tasks while the ResourceManager schedules jobs on the NodeManagers.
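
To make the map/shuffle/reduce flow concrete, here is a toy word count in plain Python; it is a conceptual sketch of the model, not the Hadoop API, and the sample chunks are invented.

```python
# Conceptual MapReduce word count: map to (key, value) pairs, group by key, reduce.
from collections import defaultdict

chunks = ["big data needs fast processing", "big data needs storage"]

# Map: emit intermediate (word, 1) pairs per chunk.
intermediate = [(word, 1) for chunk in chunks for word in chunk.split()]

# Shuffle: group intermediate values by key.
groups = defaultdict(list)
for word, count in intermediate:
    groups[word].append(count)

# Reduce: aggregate each key's values into an output value.
result = {word: sum(counts) for word, counts in groups.items()}
print(result)   # {'big': 2, 'data': 2, 'needs': 2, 'fast': 1, 'processing': 1, 'storage': 1}
```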

Spark, on the other hand, is significantly faster and easier to program than MapReduce. Apache Spark extends the MapReduce model to better support iterative algorithms and interactive data mining. Spark has the notion of resilient distributed datasets (RDDs), which means the data stays in memory and we can run multiple iterations over the data sets. But if you are dealing with an amount of data that does not fit in the RAM available in the cluster, Spark will not be able to process it and we have to go back to Hadoop. From a design and architecture perspective, it is very important to have the Spark-Hadoop integration in place before moving data processing to Spark.
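
The same word count expressed as a Spark RDD job looks like this; the sketch assumes the pyspark package and a local Spark installation, and the sample data is invented.

```python
# Minimal Spark RDD sketch (assumes the pyspark package; runs with a local master).
from pyspark import SparkContext

sc = SparkContext("local[*]", "wordcount")
lines = sc.parallelize(["big data needs fast processing", "big data needs storage"])

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

# The RDD stays in memory, so further iterations over `counts` avoid re-reading the input.
counts.cache()
print(counts.collect())
sc.stop()
```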

Moreover, we would like to process that big data within a few seconds and convert it into knowledge very quickly; Apache Storm is the solution for real-time data processing. The concepts in Storm are tuples, which are lists of key-value pairs; streams, which are sequences of tuples; spouts, the entities that generate tuples from the data sources; bolts, the entities that process these data streams; and the topology, which is a directed graph of spouts and bolts. From an architecture perspective, Storm has a master node that runs a daemon called Nimbus, worker nodes that run a daemon called Supervisor, and ZooKeeper, which coordinates the communication between Nimbus and the Supervisors and keeps the state consistent. Nimbus instructs the Supervisors to run workers, worker daemons run executors, and executors run user tasks. Regarding processing guarantees, Storm uses a tuple-tree mechanism with anchoring and spout replay to provide an at-least-once processing guarantee. If you want exactly-once processing, you need to track the state of the topology, and that is provided by Trident, which is built on Storm with connectors to the HBase NoSQL data store.
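
The spout/bolt/topology vocabulary is easier to see in a toy, single-process sketch; this is conceptual Python to illustrate the data flow, not the Storm (or streamparse) API.

```python
# Conceptual Storm-style topology: a spout emits tuples, bolts process the stream.
import itertools

def sentence_spout():
    """Spout: generates an (unbounded) stream of tuples from a data source."""
    for i in itertools.count():
        yield {"id": i, "text": "real time data processing"}

def split_bolt(stream):
    """Bolt: splits each sentence tuple into word tuples."""
    for tup in stream:
        for word in tup["text"].split():
            yield {"word": word}

def count_bolt(stream, limit=12):
    """Bolt: keeps running counts per word (only `limit` tuples for the demo)."""
    counts = {}
    for tup in itertools.islice(stream, limit):
        counts[tup["word"]] = counts.get(tup["word"], 0) + 1
    return counts

# "Topology": a directed graph spout -> split -> count.
print(count_bolt(split_bolt(sentence_spout())))
```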

In summary, both accuracy and real time are important, so we need to integrate both worlds of data processing: Hadoop for batch processing of data at scale and Storm's graph of spouts and bolts for real-time processing.


The CAP Theorem – Is it 100% Correct?

In the last five years, we have seen a proliferation of data far beyond expectations; these data are unstructured and distributed among multiple servers. For this new large and unstructured workload, it is hard to come up with a schema and to scale the system without impacting performance, so a new generation of storage systems, "key-value stores", had to replace the decades-old relational database systems. Moving from SQL row-oriented storage to NoSQL column-oriented storage made it much faster to query these unstructured data sets with less overhead.

With distributed systems, there are three common properties we want to achieve: consistency, availability and partition tolerance. Consistency means that "even though there are multiple clients reading/writing data, all clients will see the same data at any given time". Availability means that "for every read/write request, you get a quick response". Partition tolerance means that "when the network is partitioned into two parts that don't talk to each other, the system continues to work".

The CAP theorem is generally described as follows: when you build a distributed system, you can only choose two of the three desirable properties – consistency, availability and partition tolerance. Is this theorem 100% correct? And do we always need to treat the CAP theorem as the building block for designing distributed systems?

Whether we partition the network within a data center or across data centers, we still want the system to continue functioning normally if the internet gets disconnected, DNS stops replying or a ToR switch is taken down for maintenance. So partition tolerance is essential for cloud computing, and if we follow the CAP theorem, the system then has to choose between consistency and availability. But can't we achieve both to a certain extent? Can't we have eventual consistency with an always-available system, or full consistency with an acceptable level of availability?

If you examine the Cassandra key-value store through the lens of the CAP theorem, Cassandra chose availability over consistency. But you can also achieve consistency by designing an artful replication strategy for multi-data-center deployments (a number of replicas per data center for each key). In Cassandra, a client sends a read/write request to a coordinator node in the cluster, and the coordinator uses the partitioner to send the query to all replica nodes. If a replica is down, the coordinator writes to the other replicas and keeps the write locally until the downed replica comes back up. If all replicas are down, the coordinator buffers writes for a few hours. So, given a key, if all writes for that key stopped, all replicas would eventually converge to the latest write. Moreover, there are levels of consistency in Cassandra: normally we use QUORUM, which provides an acceptable level of consistency with fast query response times, but you can also use the ALL consistency level, which ensures strong consistency at the cost of slower responses. So here is another trade-off we need to make, between consistency and latency.
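
To make that trade-off concrete, here is a short sketch using the DataStax Python driver (cassandra-driver); the cluster address, keyspace and table are placeholders, and the package itself is an assumption rather than something the post uses.

```python
# Per-query consistency levels with the DataStax Python driver (cassandra-driver).
# Cluster address, keyspace and table are hypothetical placeholders.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

session = Cluster(["10.0.0.11"]).connect("shop")

# QUORUM: acceptable consistency with fast responses.
read_quorum = SimpleStatement("SELECT * FROM orders WHERE id = 42",
                              consistency_level=ConsistencyLevel.QUORUM)

# ALL: strong consistency, but every replica must answer, so latency goes up.
read_all = SimpleStatement("SELECT * FROM orders WHERE id = 42",
                           consistency_level=ConsistencyLevel.ALL)

print(session.execute(read_quorum).one())
print(session.execute(read_all).one())
```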

HBase, on the other side, chose consistency over availability. HBase is a distributed database built on top of HDFS; it consists of several tables, and each table consists of a set of column families. In HBase, datasets are divided into chunks stored as HFiles, and a collection of HFiles forms an HRegion; the HBase Master node assigns regions to the RegionServers, the daemon program that runs on each node in the cluster. ZooKeeper synchronizes tasks and helps guarantee consistency. The in-memory representation is the magic of HBase: for a write operation, we first append the operation to an append-only log in HDFS and then change the record in the MemStore. The append-only log guarantees storage consistency; if the node fails, we can replay this log.
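
The append-first write path is easy to mimic in a few lines; this is a conceptual sketch in plain Python, not the HBase implementation, and the file name and record format are invented for illustration.

```python
# Conceptual write-ahead log: append the operation to durable storage first,
# then update the in-memory store; on restart, replay the log to recover state.
import json

def put(wal_path, memstore, key, value):
    with open(wal_path, "a") as wal:                  # append-only log (HDFS in HBase)
        wal.write(json.dumps({"op": "put", "key": key, "value": value}) + "\n")
    memstore[key] = value                             # in-memory change (MemStore)

def replay(wal_path):
    memstore = {}
    with open(wal_path) as wal:
        for line in wal:
            record = json.loads(line)
            memstore[record["key"]] = record["value"]
    return memstore

memstore = {}
put("wal.log", memstore, "row1:cf:col", "hello")
assert replay("wal.log")["row1:cf:col"] == "hello"    # crash recovery by log replay
```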

To conclude, the CAP theorem is not 100% accurate, and we can reach an acceptable level of both consistency and availability by properly distributing the workload across the NoSQL cluster.


Cloud Security – Lock all the doors of your cloud architecture

Einstein once said that "we can't solve problems by using the same kind of thinking we used when we created them". This quote works in many cases in life, but it doesn't really work if you want to address the security threats behind cloud computing: you have to think first about how you would attack what you have, and then use reverse techniques to protect yourself.

So how do the bad guys think? Criminals may look for memory corruption bugs in the web server, for instance, try to figure out what IP addresses are used, and look for an unpatched service to break in through. Once they break into the web server, they can easily find their way to the database and application servers. Criminals can also send a file in the hope that someone opens it; if that happens, they can take over the machine, establish interactive control, scan the network, escalate user privileges in Active Directory to become admin users, and get the data they want. The question that asks itself here is: how can they connect to the servers with all the firewalls and protection in place? They use divert sockets to match traffic against the firewall rules and divert it into a piece of code, then look for SYN packets going to a server and alter the traffic using the generated code. They do the reverse coding on the remote site so that the hack is not captured by the security log files. An attacker may also try to create a huge number of connections in the SYN-received state until the backlog queue overflows and the host's kernel memory becomes exhausted. Moreover, the attacker can spoof the source IP addresses to make the attack harder to trace, and add complexity by launching a distributed attack from many drone machines on the internet. It is also very important to know when you are under attack, who is attacking you, what the target is and how you can stop it. Now that we know the different kinds of attacks, let's design the five locks that close all the borders of our cloud architecture.

Lock#1 – Physical: Starting with the physical layer, there is big demand in the market for COTS hardware. From a security perspective, although we are abstracting the software from the hardware, we still need to protect the hardware in order to better protect the software layers on top. The basic tip here: use built-in security in the infrastructure layer, with a hardware-based root of trust and policy engines that check hypervisor integrity and allow or deny workload migration from one host to another based on trusted security profiles.

Lock#2 – Virtual Machines: At the virtualization layer, there are many cybersecurity attacks in which malware creators inject viruses into the virtual machines; these viruses are often written in open-source programming languages such as Python. If they manage to get into a VM, they can eavesdrop on all the traffic running within a specific tenant. The VM environment is not static: VMs can be moved from one host to another, and when a VM moves, its network policies go with it, so we need to protect VMs during this transition. Virtual firewalls will protect your virtual environment from external attacks, but if the criminals manage to get past the firewall, the anti-malware intrusion detection software should raise an alarm while the intrusion prevention software tries to block the attack. From a capacity dimensioning perspective, we don't need anti-malware software and a signature database on each VM, as this would reduce host performance and accordingly the number of VMs supported per host.

Lock#3 – Hypervisor: In addition to tenant isolation, where the hypervisor isolates guests so that each guest has access only to its own resources, we need a software firewall that protects the host itself, and here we need to follow the attacker's logic. The attacker will look at your operating system's backlog size and at how long TCBs are kept in the SYN-received state before timing out; they can send a number of SYNs exactly equal to the backlog and repeat this process again and again. The solution to such SYN DoS attacks is proper ingress/egress filtering of traffic through firewalls and automatically dropping packets that are not explicitly allowed by policy, using IPsec to defend against distributed spoofed packets, plus tuning the TCP/IP stack in the operating system. We also want to secure interaction with the hypervisor by creating local users with admin rights rather than using the root credentials. Moreover, when you move your compute, storage and networking services into the cloud, you want to centralize and secure the user authentication for these services. Attackers might break in and replace the LDAP or RADIUS server you use for authentication, so to mitigate the risk, review your authentication process carefully and make sure TLS/SSL is in place and working properly.
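
One of the knobs mentioned above, the TCP listen backlog, is visible even from application code; the sketch below only shows where that queue size is set (the port and backlog value are arbitrary), while SYN cookies, filtering and IPsec live in the kernel and the firewalls.

```python
# The listen() backlog is the queue a SYN flood tries to exhaust; the value below
# is an illustrative choice, and real tuning happens in the OS TCP/IP stack.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8443))
server.listen(1024)      # half-open connections wait here until accepted or timed out
print(server.getsockname())
server.close()
```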

Lock#4 – Domain Name System: Without DNS there is no domain-to-IP mapping and accordingly no access to the outside world, so protecting your DNS is key to deploying network security. Consider deploying multi-layer defenses to protect against DDoS, a DNS stateful firewall to protect against poison-pill attacks, and DNSSEC against cache poisoning. From a design perspective, make sure you have geographical redundancy of your DNS servers, enable load balancing between them, dimension for future capacity in both throughput and response time, and have your system ready to scale up in case of an unexpected load increase.

Lock#5 – Application Programming Interface: We are moving into an open environment where any application (telco or IT) can run on any virtual layer and any hardware, and be integrated with any other existing cloud-based or legacy application, and here API security comes into the picture. Every API is fundamentally different; whatever protocol we use, whether REST/HTTP, SOAP/XML or something else, we need a trusted environment from the service layer up to the abstraction and orchestration layer. From an API security design perspective, use SSL/TLS to protect the API endpoints, put gateways that enforce security policies in front of these service APIs to protect them from DoS attacks, apply rate limiting, enforce federation relationships, and use service authentication and authorization based on orchestrated just-in-time data about who is accessing these APIs and under what conditions.
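
Of the API protections listed, rate limiting is the simplest to sketch; the token bucket below is a conceptual illustration with arbitrary rate and capacity values, not a specific gateway product's feature.

```python
# Conceptual token-bucket rate limiter for an API gateway: refill tokens over time,
# reject requests when the bucket is empty.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate, self.capacity = rate_per_sec, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                                  # caller should return HTTP 429

bucket = TokenBucket(rate_per_sec=5, capacity=10)
print([bucket.allow() for _ in range(12)])            # the last requests are rejected
```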

Finally, with all the locks above in place, we still need some kind of virtual robot, or honeypot, to learn the frequency of attacks and how they vary geographically. The algorithms in the honeypots provide detail on the source IP addresses, determine attack vectors, identify malware and support reverse engineering. To provide early warning of new malware, it is critical to design the right integration points between these honeypots and your analytics platform, and between the analytics platform and the cloud management system.


Data Center Architecture

The architecture design of cloud data centers is an art in itself. It is very similar to composing a new musical score, not only with melody, harmony and rhythm but also with the vision to scale the notes used. When we say cloud data center, we mainly mean a number of virtual data centers that sit on top of physical infrastructure in one or multiple locations. Here are my five main rules for designing next-generation cloud data centers.

Rule #1 – Know the context of your applications and their related SLAs: business applications, PCI apps, web apps, and applications written in Java, C, PHP, Python or Ruby may all have different requirements, so knowing your application requirements and what they look like on the wire is the basic starting point for designing your virtual cloud data center. Mission-critical applications require 99.999% availability (about 5 minutes of downtime a year) or even 99.9999% (about 30 seconds of downtime a year), so make sure you can still meet these SLAs when moving your applications to the cloud.
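
The downtime figures quoted above follow directly from the availability percentages; the short calculation below just makes the "five nines" arithmetic explicit.

```python
# Convert an availability SLA into allowed downtime per year.
minutes_per_year = 365.25 * 24 * 60

for nines in (0.99999, 0.999999):
    downtime_min = (1 - nines) * minutes_per_year
    print(f"{nines:.6f} -> {downtime_min:.2f} min/year ({downtime_min * 60:.0f} s)")
# 0.999990 -> 5.26 min/year (316 s)
# 0.999999 -> 0.53 min/year (32 s)
```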

Rule #2 – Carefully design your IP backbone network: First, know the number of IP addresses required and pick a unique IP range that is not used elsewhere and does not conflict with existing ranges. Second, create public and private subnets and divide them by function, such as an application subnet, a database subnet, a cloud management and analytics subnet, etc. Third, define routing tables for your public subnets towards your public gateway and for your private subnets towards your VPN gateway. Fourth, define virtual NAT (network address translation) for instances that run in private subnets, and finally define your VLANs and VLAN tags to route traffic between your virtual data centers over the same fiber connection.

Rule #3 – Design a platform for today with the capability to scale for tomorrow: this can be achieved with an auto-scaling system whose monitoring keeps an eye on your virtual environment and automatically scales it up and down based on load. With auto scaling you don't need to order anything; you just call the APIs. During peak hours, the monitoring system can request additional computing resources, and during low-traffic periods it contracts them. The basic tip here is to define the right set of rules that adjust the minimum and maximum number of servers based on either schedules or CPU utilization.
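
A minimal version of such a rule is just a comparison against thresholds; the thresholds, bounds and function name below are arbitrary examples, and a real system would call the cloud provider's scaling API instead of returning a number.

```python
# Conceptual auto-scaling rule: adjust the server count from CPU utilisation,
# clamped between configured min and max instances.
def desired_servers(current: int, cpu_util: float, min_n=2, max_n=20,
                    scale_up_at=0.75, scale_down_at=0.30) -> int:
    if cpu_util > scale_up_at:
        current += 1          # peak hours: request an extra instance via the cloud API
    elif cpu_util < scale_down_at:
        current -= 1          # quiet hours: contract to save cost
    return max(min_n, min(max_n, current))

print(desired_servers(current=4, cpu_util=0.82))   # 5
print(desired_servers(current=4, cpu_util=0.12))   # 3
```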

Rule #4 – Build a highly available virtual data center across the stack, from the database tier up to the application tier: in addition to designing redundancy into the infrastructure (both server and link redundancy), define master and slave databases with synchronous replication between them so that your applications keep running if you lose the master, keeping the time to promote the slave to master and spin up a new VM as low as possible. On the networking side, make sure your virtual routers and switches are redundant; many applications also depend on NAT (network address translation), so design a highly available NAT from the very beginning. On the application tier, replicate your applications across multiple virtual data centers, make them highly available and eliminate any single point of failure in your cloud platform. Also, stress test your applications and APIs with thousands of concurrent connections before moving your apps into the cloud.

Rule #5 – Security: Look at the compliance, regulatory and data privacy requirements in the countries where you want to launch your data centers. On the networking layer, define your virtual firewalls, apply access security rules to your subnets, and apply inbound and outbound policies to your virtual instances. On the management layer, use Identity and Access Management (IAM) to allow only the right people to make changes to your virtual data center configuration.


Analytics

BIG DATA – what does the data say? Where did the data come from? What kind of analysis is used?

Collecting and Analysing data before taking decisions is not a new discovery. Business Intelligence gave organizations a new dimension that goes beyond intuition when making decisions. Data about product development, sales, business processes and customer experience were recorded, aggregated and analysed. Data warehouses were used to collect information and business intelligence software was used to query the data and report it.

Data volume grew rapidly, which made it hard for business intelligence companies to manage it in warehouses, and from here the concept of big data and analytics took off and new technologies were created. Big data couldn't fit, or be analysed fast enough, on a single server, so it was processed with Hadoop, an open-source software framework for large-scale processing of data across several servers. The data itself was stored in public or private clouds and was unstructured, so many analytics firms turned to NoSQL databases.

The 2.0 version of analytics was perfect for improving internal business decisions, but then it was realized that there is a big business opportunity behind it – not only improving the decision-making process but also creating new products and services. Here came the 3.0 version, where new agile analytical methods and machine learning techniques are used to generate insights at a much faster rate.

A huge amount of data is processed every day in the aviation sector – around 16 petabytes (16*10^15 bytes), according to NASA research news. The importance of big data is not just a result of its volume and speed, but also of the fact that the data comes from trusted sources. So it is not only having the big data that matters; it is more about asking the right questions of this big data: What does the data say? Where did the data come from? What kind of analysis is used? And so on.

18 December 2010 – I will never forget the date when European airports closed for three days due to heavy snow and freezing temperatures, causing travel chaos across Europe; people slept for more than 48 hours in airports. Seconds matter in airports, so using big data efficiently leads to better predictions, and better predictions lead to better decisions. What really matters is managing the right data out of all this big data: just imagine connecting all the airplanes in the global sky, feeding the data to the analytics systems in the airports, and creating more valuable products and services by integrating this analytics platform with other data governance systems. If we had this system, do you think flight MH370 would have been lost in the Indian Ocean without any critical facts about what happened?


5G – Unlimited Data Rates and Low Latency


When we started building 2.5G and 3G networks, the aim was to provide voice, SMS and some data services with an accepted user experience and quality of service; the blended ARPU for telcos was mainly generated from voice services. The technology shift in the last three years has opened the door to new business opportunities and set a new challenge for telcos to capture some of the revenue taken by OTT players. Voice ARPU is declining year over year, and new services such as video streaming, mobile TV, mobile payments, e-commerce, e-health, connected cars, smart metering and cloud services are expected to make up over 40% of telco revenue streams. To provide these services, end-to-end latency needs to be reduced and network reliability improved in order to support multi-Gbps data rates. An evolved version of LTE and HSPA is required to satisfy the requirements of these new types of connections; that is 5G, which relies on smart antennas together with new or evolved RATs so that end users can benefit from the unlimited data rates and low latency.


Monetary Policy – ECB and FED

ECB, FED and Central Bank of Ireland

How to measure money, the monetary base and central banks' balance sheets, transmission mechanisms of monetary policy, monetary policy strategy and conduct, and quantitative easing.

In 1913, the US Federal Reserve had two mandates:
• Price stability, or low inflation
• Low unemployment
In the EU, there is one mandate, which is price stability. The Maastricht Treaty mandate was to keep the inflation rate below 2% (a low inflation rate means high growth).

Money theory: the % change in money is reflected partly in the % change in prices and partly in the % change in GDP. For industrial countries, increasing money by 5% could cause inflation to increase by 2% and output (GDP) to increase by 3%.
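
For what it's worth, that 5% = 2% + 3% split is the textbook quantity-theory decomposition in growth rates, assuming the velocity of money stays roughly constant:

```latex
% Quantity theory: M V = P Y, so in growth rates (velocity V roughly constant):
\%\Delta M \;\approx\; \%\Delta P + \%\Delta Y_{\text{real}}
\qquad\Longrightarrow\qquad
5\% \;\approx\; 2\%\ (\text{inflation}) \;+\; 3\%\ (\text{real GDP growth})
```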

Central banks' conduct-of-strategy arguments start with money and end with output (GDP); in between are interest rates, inflation, monetary aggregates, etc. The two main sources for monetary policy are the European Central Bank (www.ecb.int) and the US Federal Reserve System (www.federalreserve.gov). The central banks' main role is to secure financial stability.

The Central Bank of Ireland follows the ECB's monetary policies and mandates. The ECB's and the Bank of Ireland's definitions of monetary aggregates are based on a harmonised definition of the money-issuing sector and the money-holding sector, as well as harmonised categories of MFI liabilities.
M1: Currency in circulation + overnight deposits
M2: M1 + deposits with an agreed maturity up to 2 years + deposits redeemable at a period of notice up to 3 months.
M3: M2 + repurchase agreements + money market fund (MMF) shares/units + debt securities up to 2 years

Ireland made many policy mistakes and is paying a high price to correct them. The corrections are painful yet unavoidable. The country has received unprecedented support from the ECB to get out of the financial crisis that started in 2008. The Irish economy is growing at the moment and inflation figures have started to improve. My own view is that the way for the Irish economy to come out of the financial crisis is to invest in long-term businesses that raise the employment rate and keep inflation figures down in the coming years.
