
Abstract Factory Pattern


Design patterns, and the components used to describe them:

A design pattern describes a problem that occurs repeatedly in an environment, and then describes the core of the solution to that problem. We describe design patterns in a consistent format: each pattern is divided into sections according to the following template.

1. Pattern name
2. Intent (what the pattern does, in brief)
3. Also known as
4. Motivation
5. Applicability
6. Structure
7. Participants
8. Collaborations
9. Implementation
10. Sample code
11. Known uses
12. Related patterns
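Since this post is titled after the Abstract Factory pattern, here is a minimal sketch of it in Python. The GUI example (buttons and checkboxes for two hypothetical platforms) is my own illustration, not from the post; the point is that client code asks a factory for a whole family of related products without naming concrete classes.

```python
from abc import ABC, abstractmethod

# Abstract products: the interfaces the client programs against.
class Button(ABC):
    @abstractmethod
    def render(self) -> str: ...

class Checkbox(ABC):
    @abstractmethod
    def render(self) -> str: ...

# Concrete products for two hypothetical platforms.
class WinButton(Button):
    def render(self) -> str:
        return "windows button"

class MacButton(Button):
    def render(self) -> str:
        return "mac button"

class WinCheckbox(Checkbox):
    def render(self) -> str:
        return "windows checkbox"

class MacCheckbox(Checkbox):
    def render(self) -> str:
        return "mac checkbox"

# Abstract factory: creates a family of related products.
class GuiFactory(ABC):
    @abstractmethod
    def create_button(self) -> Button: ...
    @abstractmethod
    def create_checkbox(self) -> Checkbox: ...

class WinFactory(GuiFactory):
    def create_button(self) -> Button:
        return WinButton()
    def create_checkbox(self) -> Checkbox:
        return WinCheckbox()

class MacFactory(GuiFactory):
    def create_button(self) -> Button:
        return MacButton()
    def create_checkbox(self) -> Checkbox:
        return MacCheckbox()

def build_ui(factory: GuiFactory) -> list:
    # Client code depends only on the abstract interfaces,
    # so swapping platforms means swapping one factory object.
    return [factory.create_button().render(),
            factory.create_checkbox().render()]

print(build_ui(WinFactory()))
```

Swapping `WinFactory()` for `MacFactory()` changes the entire product family with no change to `build_ui`, which is the intent section of the template in one line.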

A systematic approach makes programmatic reuse work

Obstacles in the reuse process fall into four categories:

1. Engineering: identify the basic elements of the model. For example, in a bank application, identifying the requirement tools lets the programmer work on the application area rather than the design area.
2. Process (to be continued)
3. Organizational
4. Business-oriented

Software reuse success factors:

Software reuse is a simple idea: develop components of a reusable size that satisfy the maximum functionality in their area, and reuse those existing components instead of rebuilding them. Reuse should apply to all phases of the software development life cycle: not only code, but also requirements, analysis, design, and tests should be reused to produce an effective software product. Developers save problem-solving effort across the complete development life cycle, and there is no redundant work for the programmer: reliability increases, development time is saved, and testing time is saved.

Let me explain the basic prerequisites for design patterns:

1. Software engineering
2. UML (Unified Modeling Language)
3. Object-oriented concepts
4. Use-case components

Kafka: Streaming Architecture

Kafka is used most often for real-time streaming of data into other systems: it acts as a middle layer that decouples your real-time data pipelines. Kafka core is not well suited to direct computations such as data aggregation or CEP, but Kafka Streams, which is part of the Kafka ecosystem, does provide the ability to do real-time analytics. Kafka can feed fast-lane systems (real-time and operational data systems) like Storm, Flink, Spark Streaming, your own services, and CEP systems. Kafka is also used to stream data for batch data analysis: it feeds Hadoop, and it streams data into your big-data platform or into an RDBMS, Cassandra, Spark, or even S3 for future data analysis. These data stores often support data analysis, reporting, data science, compliance auditing, and backups.

(Kafka streaming architecture diagram)
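The decoupling described above can be sketched with a toy in-memory broker. This is not the real Kafka API; it is a minimal stand-in showing how producers and consumers interact only through named topics, so a "fast lane" consumer and a "batch" consumer can be attached or removed without either side knowing about the other.

```python
from collections import defaultdict
from typing import Callable

class MiniBroker:
    """Toy in-memory stand-in for a Kafka-style broker (illustration only)."""

    def __init__(self):
        self.topics = defaultdict(list)       # topic -> list of records (the "log")
        self.subscribers = defaultdict(list)  # topic -> list of consumer callbacks

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        # Consumers register interest in a topic, never in a producer.
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, record: str) -> None:
        self.topics[topic].append(record)        # append to the topic's log
        for handler in self.subscribers[topic]:  # fan out to every consumer
            handler(record)

broker = MiniBroker()
analytics, archive = [], []
broker.subscribe("clicks", analytics.append)  # "fast lane" real-time consumer
broker.subscribe("clicks", archive.append)    # "batch" consumer feeding storage
broker.publish("clicks", "user-1:home")
broker.publish("clicks", "user-2:cart")
```

Both consumers receive every record on the topic, which mirrors how the same Kafka stream can feed Spark Streaming for real-time analytics and Hadoop or S3 for batch analysis at the same time.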

Who uses Kafka?

Many large companies that handle a lot of data use Kafka. LinkedIn, where it originated, uses it to track activity data and operational metrics. Twitter uses it as part of Storm to provide a stream-processing infrastructure. Square uses Kafka as a bus to move all system events (logs, custom events, metrics, and so on) to its various data centers, with outputs to Splunk and Graphite (dashboards), and to implement Esper-like CEP alerting systems. It is also used by other companies like Spotify, Uber, Tumblr, Goldman Sachs, PayPal, Box, Cisco, Cloudflare, Netflix, and many more.

Kafka use cases

In short, Kafka is used for stream processing, website activity tracking, metrics collection and monitoring, log aggregation, real-time analytics, CEP, ingesting data into Spark, ingesting data into Hadoop, CQRS, message replay, error recovery, and as a guaranteed distributed commit log for in-memory computing (microservices).
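Two of the use cases above, message replay and error recovery, both rest on the commit-log idea: records are appended once, and each consumer group tracks its own read offset, which it can rewind. Here is a toy sketch of that mechanism; the class and method names are my own illustration, not Kafka's API.

```python
class CommitLog:
    """Toy append-only log with per-group offsets, mimicking Kafka-style replay."""

    def __init__(self):
        self.records = []
        self.offsets = {}  # consumer group -> next offset to read

    def append(self, record: str) -> None:
        self.records.append(record)  # records are never mutated or deleted

    def poll(self, group: str, max_records: int = 10) -> list:
        start = self.offsets.get(group, 0)
        batch = self.records[start:start + max_records]
        self.offsets[group] = start + len(batch)  # commit the new offset
        return batch

    def seek(self, group: str, offset: int) -> None:
        self.offsets[group] = offset  # rewind (or skip ahead) to replay

log = CommitLog()
for event in ["e1", "e2", "e3"]:
    log.append(event)

first = log.poll("billing")   # billing group reads all three events
log.seek("billing", 0)        # error recovery: rewind to the start
again = log.poll("billing")   # the same records are replayed
```

Because consuming a record only advances an offset rather than deleting the record, a consumer that crashed mid-processing can seek back and replay, which is what makes the log a safe source of truth for downstream systems.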

Why Kafka?

Kafka is often used in real-time streaming data architectures to provide real-time analytics. Because it is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system, Kafka is chosen for use cases where JMS, RabbitMQ, and AMQP may not even be considered due to volume and responsiveness requirements. Its higher throughput, reliability, and replication characteristics make it suitable for things like tracking service calls (it tracks every call) or tracking IoT sensor data, where a traditional MOM might not be considered. Kafka works with Flume/Flafka, Spark Streaming, Storm, HBase, Flink, and Spark for real-time ingestion, analysis, and processing of streaming data. Kafka streams data into Hadoop big-data lakes, and Kafka brokers support massive message streams for low-latency follow-up analysis in Hadoop or Spark. Also, Kafka Streams (a subproject) can be used for real-time analytics.