Notes on our use of Presto

Reasons for use:

Our big data developers and BI colleagues need to query all kinds of data with Hive every day, and more and more reporting work runs on Hive. Although the CDH cluster has Impala deployed, most of our Hive tables are stored in ORC format, and Impala's support for ORC is poor. Before we adopted Presto, these tables were queried through Hive on MapReduce, so query efficiency was low.

Presto introduction:

Presto is an open-source distributed SQL query engine designed for interactive analytic queries over massive data sets. It was created to address the slow, batch-oriented interactive analysis typical of commercial data warehouses. It supports standard ANSI SQL, including complex queries, aggregations, joins, and window functions.

Working principle:

Presto's execution model is fundamentally different from that of Hive or MapReduce. Hive translates a query into multiple stages of MapReduce jobs that run one after another; each stage reads its input from disk and writes its intermediate results back to disk. Presto does not use MapReduce at all: it has a custom query and execution engine with operators designed to support SQL semantics. Besides an improved scheduling algorithm, all data processing happens in memory, and the processing stages form a pipeline connected over the network, which avoids unnecessary disk reads and writes and the extra latency they bring. This pipelined execution model runs multiple processing stages at the same time, and data is passed from one stage to the next as soon as it becomes available, which greatly reduces the end-to-end response time of all kinds of queries.

Use cases:

1. Everyday Hive queries: more and more colleagues query Hive tables through Presto. Compared with Hive on MapReduce, Presto improves query efficiency dramatically.
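For colleagues who want to query from code rather than from the Presto CLI, a minimal sketch using the Presto JDBC driver might look like this (the coordinator host, catalog, schema, table and user are placeholder assumptions, and the presto-jdbc driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PrestoQueryExample {
    public static void main(String[] args) throws Exception {
        // assumed coordinator address; the "hive" catalog must be configured on the Presto cluster
        String url = "jdbc:presto://presto-coordinator:8080/hive/default";
        try (Connection conn = DriverManager.getConnection(url, "bi_user", null);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT dt, count(*) AS cnt FROM orders GROUP BY dt ORDER BY dt")) {
            while (rs.next()) {
                System.out.println(rs.getString("dt") + "\t" + rs.getLong("cnt"));
            }
        }
    }
}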

Using Sidecar to introduce Node.js into Spring Cloud

Theory

Introduction

Spring Cloud is currently a popular microservice solution. It combines the development convenience of Spring Boot with the rich component suite of Netflix OSS. As we all know, unlike Dubbo, Spring Cloud builds the whole service system out of REST services over HTTP(S).

Is it possible to develop some of those REST services in a non-JVM language we are familiar with, such as Node.js? Of course. But a bare REST service by itself cannot join the Spring Cloud system: we also want to use the Eureka provided by Spring Cloud for service discovery, Config Server for configuration management, and Ribbon for client-side load balancing. This is where Spring Cloud's Sidecar can show its talents.

Sidecar originated from Netflix Prana. It provides an HTTP API through which you can query all instances of a registered service (host, port, and so on). It can also proxy service calls through an embedded Zuul proxy that obtains its routes from Eureka. Spring Cloud Config Server can be reached either directly or through the Zuul proxy.

One thing to be aware of: the Node.js application you develop has to expose a health-check interface so that the Sidecar can report the health of this service instance to Eureka.

To use Sidecar, create a Spring Boot application annotated with @EnableSidecar. Let's look at what this annotation does:

@EnableCircuitBreaker
@EnableDiscoveryClient
@EnableZuulProxy
public @interface EnableSidecar {

See: the Hystrix circuit breaker, Eureka service discovery and the Zuul proxy are all switched on by this one annotation.
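A minimal sketch of such a launcher class (class and package names are arbitrary; it assumes the spring-cloud-netflix-sidecar dependency is on the classpath, and the exact import path can differ between Spring Cloud versions):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

@SpringBootApplication
@EnableSidecar
public class SidecarApplication {
    public static void main(String[] args) {
        SpringApplication.run(SidecarApplication.class, args);
    }
}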

Health check

Next we add sidecar.port and sidecar.health-uri to application.yml. The sidecar.port property is the port the Node.js application listens on; it lets the sidecar register that service in Eureka. sidecar.health-uri is a URI on the Node.js application that mimics a Spring Boot health endpoint. It must return a JSON document of the following form:

{ "status": "UP" }

The application.yml of the whole Sidecar application looks roughly like this:

server:

Service access

After building this application, you can call the /hosts/{serviceId} API on it to get the result of DiscoveryClient.getInstances(). Below is roughly the shape of what /hosts/customers returns when there are two instances on different hosts (the field values here are illustrative). If the sidecar runs on port 5678, the Node.js application can reach this API at http://localhost:5678/hosts/{serviceId}.

[
  { "host": "host-1", "port": 8080, "uri": "http://host-1:8080", "serviceId": "customers", "secure": false },
  { "host": "host-2", "port": 8080, "uri": "http://host-2:8080", "serviceId": "customers", "secure": false }
]

The Zuul proxy automatically adds a /<serviceId> route for every service registered in Eureka, so the customers service can be reached through the /customers URI. Again assuming the sidecar listens on port 5678, our Node.js application can call the customers service at http://localhost:5678/customers.

Config Server

If we run a Config Server and register it in Eureka, the Node.js application can access it through the sidecar's Zuul proxy. If the Config Server's serviceId is configserver and the Sidecar listens on port 5678, it can be reached at http://localhost:5678/configserver.

A Node.js application can also use Config Server's ability to serve configuration documents, for example in YAML format. A request to http://sidecar.local.spring.io:5678/configserver/default-master.yml might return a YAML document like the following:

eureka:

So the overall architecture of a Node.js application joining a Spring Cloud microservice cluster through Sidecar looks roughly like this:

[Figure: architecture diagram of the Node.js application, its Sidecar, and the Spring Cloud cluster]

Demo practice

Let's suppose we have a very simple data class called User:

class User {

It looks very classic, ha!

Another data structure is used to represent books, Book:

class Book {

The authorId in Book corresponds to the id of a User. Now we need to develop REST services for these two data types.
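A minimal sketch of the two data classes (apart from Book.authorId, which the text mentions, the fields are assumptions; they are kept as public fields only to keep the sketch short):

class User {
    public Long id;
    public String name;
}

class Book {
    public Long id;
    public String title;
    public Long authorId; // corresponds to User.id
}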

First, the User service. We develop it with Spring: mock some fake users in the controller's constructor, and then expose a very simple GET interface that looks a user up by id.

@GetMapping("/{id}")
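A minimal sketch of such a controller (class and field names are illustrative; the fake users are generated in the constructor, as described above):

import java.util.HashMap;
import java.util.Map;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class UserController {

    private final Map<Long, User> users = new HashMap<>();

    public UserController() {
        // mock some fake users in the constructor
        for (long i = 1; i <= 100; i++) {
            User u = new User();
            u.id = i;
            u.name = "user-" + i;
            users.put(i, u);
        }
    }

    @GetMapping("/{id}")
    public User findById(@PathVariable Long id) {
        return users.get(id);
    }
}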

After starting it, we test with curl:

curl localhost:8720/12

Next, we use Node.js to develop Book related interfaces.

Because the Node.js community is very active, there are many REST frameworks to choose from. The mainstream ones are express, koa, hapi, and so on, plus very light and easily extensible ones like connect. Considering user base and documentation richness, I chose express to build this REST service that will join Spring Cloud.

const express = require('express')

Likewise, faker is first used to mock 100 records, and then a simple GET route is written.

After startup, we open http://localhost:3000/book/1 in a browser.

Now that we have two micro services, next we launch a Sidecar instance to connect Node.js to Spring Cloud.

@SpringBootApplication

Very simple. Note that before this you need a eureka-server for everything to register with; and since I also want to test the sidecar's ability to proxy access to Spring Cloud Config, I use a config-server as well. Readers familiar with Spring Cloud will know what these are.

In the sidecar's configuration, bootstrap.yaml simply specifies the service port and the address of the config-server, and node-sidecar.yaml is configured as follows:

eureka:

The address of the Node.js service fronted by the sidecar is specified here. hystrix.command.default.execution.timeout.enabled: false is set because the sidecar goes through Hystrix, whose default timeout circuit breaker is one second; with the speed of access to GitHub from here being what it is, my test requests to config-server kept timing out, so I simply disabled the timeout. You could instead choose to increase the timeout value.

When eureka-server, config-server, user-service, node-sidecar and node-book-service are all started, we open the Eureka dashboard:

http://localhost:8700/

Our services are all in the UP state, which means everything is normal. Next, look at the console of the Node.js application:

We can see traffic coming in on /health; this is clearly node-sidecar calling our Node application's health-check interface.

Now for the moment of truth. We curl the sidecar's port 8741:

curl localhost:8741/user-service/12

The result is identical to calling user-service directly, which shows that the sidecar's Zuul proxy can forward our request to the user-service service.

With this proxy in place, we now want the book service to provide an author-information interface:

const SIDECAR = {

We access http://localhost:3000/book/2/author and can see the author information for the book with bookId 2. But there is a problem: we cannot reach the Node.js interface through http://localhost:8741/node-sidecar/book/1 the way we proxied to user-service, so how is user-service supposed to fetch data from the book service? Looking back at the theory section, we can call /hosts/<serviceId> to get information about each service, so let's try http://localhost:8741/hosts/node-sidecar and see what comes back:

In the returned information we can see things like the URI of the Node.js application. So could we first call the sidecar's interface, get the real URI, and then call book-service's /books?uid=<uid> interface ourselves? We could, but in fact Spring Cloud already has a tool that does this for us: Feign. Create a new BookFeignClient.java:

@FeignClient(name = "node-sidecar")

A FeignClient automatically resolves the service address from Eureka by its serviceId. If the service has more than one instance, Ribbon is used for client-side load balancing, and RequestMapping-style annotations keep the client consistent with the server-side controller. By defining a findByUid method we can easily call the /books?uid=<uid> interface defined in the Node.js service above. This also matches the sidecar architecture described earlier.
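A minimal sketch of what such a Feign client might look like (the path and service name come from the article; the return type and method signature are assumptions, and the import path for FeignClient differs between Spring Cloud versions):

import java.util.List;
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(name = "node-sidecar")
public interface BookFeignClient {

    // calls GET /books?uid=<uid> on the Node.js book service, resolved via its sidecar registration
    @RequestMapping(value = "/books", method = RequestMethod.GET)
    List<Book> findByUid(@RequestParam("uid") Long uid);
}

For this interface to be picked up, the user-service application also needs the @EnableFeignClients annotation and the Feign starter dependency.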

FeignClient

RequestMapping

FindByUid

/books? Uid=< uid>

Now we define a new type Author in user-service, which extends User and adds a books field:

class Author extends User {

Add an interface to get Author:

@GetMapping("/author/{id}")

The logic is simple: look up the corresponding user, fetch the books through bookFeignClient by uid, then build and return the Author.
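A minimal sketch of that endpoint, added to the user controller sketched earlier (names are illustrative, and the BookFeignClient above is assumed to be injectable):

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;

// added inside UserController; Author (from above) is assumed to have public id, name and books fields

@Autowired
private BookFeignClient bookFeignClient;

@GetMapping("/author/{id}")
public Author findAuthorById(@PathVariable Long id) {
    User user = users.get(id);                      // look up the corresponding user
    Author author = new Author();
    author.id = user.id;
    author.name = user.name;
    author.books = bookFeignClient.findByUid(id);   // fetch the user's books through Feign + sidecar
    return author;
}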

We visit http://localhost:8720/author/11 and get the combined result.

So far we have made the two languages, Java and Node.js, call each other with the help of sidecar and the plain HTTP protocol. For more operations, such as reading configuration from config-server or application information from Eureka, you can download the source code of my experiment and dig in.

I have put the whole demo on my GitHub; you can clone it directly:

git clone https://github.com/marshalYuan/spring-cloud-example.git

The whole project layout is roughly as follows:

eureka-server          // the Eureka Server in the figure above
config-server          // the Config Server in the figure above
config-repo            // the searchPath / repository used by config-server
user-service           // services developed in Java; both a service provider and a consumer
node-sidecar           // the sidecar instance responsible for connecting the Node app to Spring Cloud
book-service-by-node   // REST services developed with express.js

Start the five applications in this order:

eureka-server -> config-server -> user-service -> book-service-by-node -> node-sidecar

The demo is only meant for testing, so do not expect much in the way of error handling.

Final thoughts

As said at the beginning, thanks to the universal HTTP protocol and the rich Netflix suite, we can plug many non-JVM languages such as Node.js, PHP and Python into Spring Cloud, a very mature microservice framework, and quickly build out our microservice business system. You might ask: why not just use Java for everything? Indeed, developing and maintaining a single language in a single system costs much less, but there are situations where the sidecar solution is worth choosing.

For example, when the historical burden is too heavy to move everything onto the Java platform, and you do not want to rewrite all the old services, sidecar lets you integrate them at the small cost of agreeing on a protocol, whether going from Java to other platforms or the other way around.

There is also the idea of "embracing the language dividend". Choosing a development language means choosing the tools and libraries that come with it. For example, Python is popular for data analysis right now, so that part of a microservice system can be developed in Python; Node.js has an excellent asynchronous, event-driven model, so it can be used for services that handle large numbers of asynchronous requests; and so on. I am not trying to start a "best language holy war" here; I think comparing languages purely on strengths and weaknesses without considering scenario and ecosystem is trolling. Take the code below as an example: I cannot think of another language whose code is as simple to understand as Haskell's.

Pythagorean triples:

[(x, y, z) | x <- [1..100], y <- [x..100], z <- [y..100], x^2 + y^2 == z^2]

Besides, our topic today is Node.js, and everyone knows the best language is PHP. (runs away~)

    Communication between Android components

First, let's sort out the ways in which different components communicate with each other in Android.

(Tip: except for file storage and ContentProvider, the approaches below generally refer to communication within the same process. Cross-process communication additionally needs Messenger or AIDL, which will be introduced in detail some other time and is not discussed here.)

Mode 1: passing values with an Intent (between Activity and Activity)

Sending example:

Intent intent = new Intent();
intent.putExtra("extra", "Activity1");
intent.setClass(Activity1.this, Activity2.class);
startActivity(intent);

Receiving example:

Intent intent = getIntent();
String data = intent.getStringExtra("extra");
TextView tv_data = (TextView) findViewById(R.id.tv_data);
tv_data.setText(data);

Mode 2: passing values with a Binder (between an Activity and a Service)

1. Define the Service

In the Service, define an inner class that extends Binder and use it to hand the Service object to the Activity that needs it, so that the Activity can call the Service's public methods and fields, roughly like this:

public class MyService extends Service { // expose the Service through a Binder subclass you define

2. Bind the Service from the Activity

The Activity obtains the MyService object through the IBinder's getService() and then calls its public methods. The code is as follows:

public class MyBindingActivity extends Activity {
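A minimal sketch of the two classes, each in its own file (class and method names are illustrative):

import android.app.Activity;
import android.app.Service;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.ServiceConnection;
import android.os.Binder;
import android.os.IBinder;

// Service side: expose the running Service instance through a Binder subclass
public class MyService extends Service {

    public class LocalBinder extends Binder {
        public MyService getService() {
            return MyService.this;
        }
    }

    private final IBinder binder = new LocalBinder();

    @Override
    public IBinder onBind(Intent intent) {
        return binder;
    }

    // a public method the Activity can call once it holds the Service object
    public String getCurrentStatus() {
        return "running";
    }
}

// Activity side: bind to the Service, then call its public methods
public class MyBindingActivity extends Activity {

    private MyService myService;

    private final ServiceConnection connection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            myService = ((MyService.LocalBinder) service).getService();
            myService.getCurrentStatus();   // we can now call public methods directly
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            myService = null;
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        bindService(new Intent(this, MyService.class), connection, Context.BIND_AUTO_CREATE);
    }

    @Override
    protected void onStop() {
        super.onStop();
        unbindService(connection);
    }
}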

Mode 3: passing values with a Broadcast

In essence, sending and receiving a Broadcast is used to implement the communication.

Example of sending a Broadcast:

static final String ACTION_BROAD_TEST = "com.my.broad.test";
// send
Intent mIntent = new Intent(ACTION_BROAD_TEST);

Example of receiving the Broadcast:

// dynamically register the broadcast receiver
public void registerMessageReceiver() {
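A minimal sketch of both sides, inside an Activity (the action string comes from the fragment above; the extra key and method names are illustrative):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

static final String ACTION_BROAD_TEST = "com.my.broad.test";

// sending side
void sendTestBroadcast(String message) {
    Intent intent = new Intent(ACTION_BROAD_TEST);
    intent.putExtra("message", message);
    sendBroadcast(intent);
}

// receiving side
private final BroadcastReceiver messageReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        String message = intent.getStringExtra("message");
        // handle the message here
    }
};

public void registerMessageReceiver() {
    registerReceiver(messageReceiver, new IntentFilter(ACTION_BROAD_TEST));   // e.g. in onResume()
}

public void unregisterMessageReceiver() {
    unregisterReceiver(messageReceiver);                                      // e.g. in onPause()
}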

Mode 4: use Application, SharedPreferences, file storage, a database, ContentProvider, and so on

The idea is to use the Application object, with its longer life cycle, to hold data for different Activities and other components to read and write. It is not safe, however, because the Application may be reclaimed. SharedPreferences, file storage and databases all boil down to storing data in files, so they are not discussed further here.

Mode 5: use interfaces (callbacks)

Define an interface; the places that care about an event implement it, and the places that trigger the event register and unregister the listeners that are interested. This is the observer pattern, and its problem is obvious: it tends to couple the different components more tightly, and the growing number of interfaces becomes a chore to maintain. For space reasons I will not expand on it here.

To sum up, every one of these communication methods has problems of one kind or another. Mode 5 couples things badly, especially as the interfaces multiply; broadcasts are not a good fit when an Activity and a Fragment need to interact; and so on. So we need something simpler, EventBus, to achieve loosely coupled communication between components.

Mode 6: EventBus

    EventBus class library introduction

EventBus is a publish/subscribe event bus library optimized for Android.

It decouples event senders and receivers; it works well across Activities, Fragments and background threads; it avoids complex and error-prone dependencies and lifecycle issues; it makes your code more concise; it is fast; the library is small (< 50 KB jar); it is already used by apps with more than 100,000,000 installs in total; and it has advanced features such as delivery threads, subscriber priorities, and so on.

EventBus is used in three steps:

Define events: public class MessageEvent { /* additional fields if needed */ }

Prepare subscribers: eventBus.register(this);

public void onEvent(AnyEventType event) { /* do something */ }

Post events:

eventBus.post(event);
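Putting the three steps together, a minimal sketch inside an Activity (this follows the EventBus 2.x style used above, where handler methods are named onEvent; the event class and field are illustrative):

import de.greenrobot.event.EventBus;

public class MessageEvent {
    public final String text;
    public MessageEvent(String text) { this.text = text; }
}

// inside an Activity or Fragment:
@Override
protected void onStart() {
    super.onStart();
    EventBus.getDefault().register(this);      // prepare the subscriber
}

@Override
protected void onStop() {
    EventBus.getDefault().unregister(this);
    super.onStop();
}

// called when a MessageEvent is posted
public void onEvent(MessageEvent event) {
    // do something with event.text
}

// from anywhere in the app:
void sendHello() {
    EventBus.getDefault().post(new MessageEvent("hello"));   // post the event
}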

Here is an example of EventBus usage from the Internet: http://blog.csdn.net/jdsjlzx/article/details/40856535

Problems with EventBus?

Of course, EventBus is not a panacea, and some problems come up in practice. Precisely because it is so convenient, it is easily misused and can make the code logic more chaotic instead; for example, some places end up sending messages in a loop, and so on. In a later article we will look carefully at whether there is a better alternative to EventBus, such as RxJava.

Android Service keep-alive: attack and defense

In June 2015 the company launched a project which, like travel and ride apps, needs to upload latitude and longitude in real time, and that raises the problem of keeping a background Service alive. Because of this kind of business scenario, keep-alive is a problem many developers run into, and many people ask about it online, for example: how do I make my Android app keep running in the background, like QQ or WeChat, without being killed? There are plenty of answers, but only a handful of approaches actually achieve the desired effect. Below I will go through the various schemes, combined with my own development experience.

One. Why keep-alive?

The need for keep-alive arises because we want our service or process to keep running in the background, while all kinds of things can kill it. The main causes are: 1. Android system memory reclamation; 2. the phone manufacturer's custom management features, such as power management and memory management; 3. third-party software; 4. the user stopping the app manually.

Two. Keep-alive techniques

    1. Modify the return value of the onStartCommand method of Service

Will the service restart after it is killed? The usual approach is to change the return value to START_STICKY. onStartCommand() returns an integer describing whether the system should restart the service after killing it. There are three return values:

START_STICKY: if the service process is killed, the service is kept in the started state but the delivered Intent is not retained, and the system will later try to recreate the service.

START_NOT_STICKY: with this return value, if the service is killed after onStartCommand has finished executing, the system will not automatically restart it.

START_REDELIVER_INTENT: with this return value, if the service is killed after onStartCommand has finished executing, the system will restart it after a while and redeliver the last Intent to it.

[Feasibility] Looking at these three return values, it seems hopeful: can we reach our goal by returning START_STICKY or START_REDELIVER_INTENT? In practice, tests show that apart from low-memory kills, the service only actually gets restarted in a few situations and on a few models.
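A minimal sketch (the service name is illustrative):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

public class LocationService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // do the background work here
        // ask the system to recreate the service after it has been killed
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}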

2. Restart the Service from onDestroy

Send a broadcast in onDestroy, and restart the Service in the broadcast receiver.
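A minimal sketch of this pattern (the action string and receiver name are illustrative; the receiver has to be registered for that action):

// inside the Service
@Override
public void onDestroy() {
    super.onDestroy();
    // tell a receiver that the service has just died
    sendBroadcast(new Intent("com.dwd.action.RESTART_SERVICE"));
}

// a registered BroadcastReceiver restarts the service
public class RestartReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        context.startService(new Intent(context, LocationService.class));
    }
}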

[Feasibility] Under the four kill causes listed above, when the Service is killed the whole app process is usually gone with it, so onDestroy never even executes and there is no chance to restart the service this way.

3. Raise the Service priority

Raise the priority when registering the Service in the manifest:

<service android:name="com.dwd.service.LocationService" android:exported="false">

[Feasibility] This method does not work for a Service; a Service has no such priority attribute.

4. Foreground service

A foreground service is considered a running service that the user is actively aware of, so when the system needs to reclaim memory it will not kill that process first. A foreground service must show a notification in the status bar.

NotificationCompat.Builder nb = new NotificationCompat.Builder(this);
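Continuing from that builder, a minimal sketch of promoting the service to the foreground (icon, texts and notification id are illustrative; this predates Android 8.0 notification channels):

// inside the Service, e.g. in onStartCommand()
nb.setSmallIcon(R.drawable.ic_launcher)
  .setContentTitle("Location service")
  .setContentText("uploading coordinates...")
  .setOngoing(true);

// any non-zero id will do; this keeps the process at foreground priority
startForeground(1001, nb.build());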

[Feasibility] This does help against system reclamation and lowers the probability of being killed, but in a very low memory situation the process will still be killed and will not restart. And if a cleaning tool or the user force-stops the app, the process dies and does not come back.

5. Guardian processes

There are two ways to implement this scheme. One is dual services or dual processes that start and watch each other: when one is killed, the other restarts it. The other is to fork a child process from the native layer to guard the main process.

[Feasibility] In the first way, the two processes or services are killed together with the application process, so neither gets the chance to restart the other. The second way really can wake the app up after it is killed, but Android 5.0 and above puts forked child processes into the same process group as the app: when the main process dies the whole group is killed, so the fork trick no longer works on Android 5.0 and above.

6. Listening to system broadcasts

Listen for certain system broadcasts, such as boot completed, screen unlock, network connectivity change, application state change and so on, then check whether the Service is alive and start it if it is not.

[Feasibility] Since Android 3.1, to strengthen security and optimize performance, the system restricts these broadcasts: an application that has never been launched after installation, or that the user has force-stopped, can no longer receive the common system broadcasts such as boot completed, screen unlock or connectivity change. On top of that, the latest Android N removed the implicit network-change broadcast, which is really sad; this approach is less and less usable.

7. Mutual wake-up between applications

Different apps wake each other up with broadcasts. The Alibaba family is an example: open any one of Alipay, Taobao, Tmall and so on, and the other Alibaba apps get woken up; the BAT companies almost all do this. In addition, many push SDKs will also wake up the apps that embed them.

[Feasibility] Mutual wake-up requires several related apps to be installed, and SDK-based wake-up also stops working once the user force-stops the app.

8. A one-pixel Activity

After the app goes to the background, leave a page of only one pixel on the desktop so the app counts as being in the foreground and protects itself from background cleaning tools. This scheme was exposed by Xiaomi as a practice of Tencent QQ.

[Feasibility] It will still be killed.

    9. Install APK to /system/app and transform it to system level application.

[Feasibility] This only suits preinstalled applications; an ordinary application cannot turn itself into a system-level one.

10. Using the account and sync mechanism provided by the Android system

Create an account in the application, then enable automatic sync and set the sync interval, and use sync to wake the app up. Once the account is created it is visible under Settings -> Accounts; the user may delete the account or turn sync off, so you need to check periodically whether the account can still sync.

// create the account
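A minimal sketch of creating the account and turning on periodic sync (the account name, account type and authority are illustrative; a matching authenticator and SyncAdapter still have to be declared for sync to actually run):

import android.accounts.Account;
import android.accounts.AccountManager;
import android.content.ContentResolver;
import android.os.Bundle;

// 'context' is any Context, e.g. the Application
Account account = new Account("keep-alive", "com.dwd.account");               // name + account type
AccountManager am = AccountManager.get(context);
am.addAccountExplicitly(account, null, null);

String authority = "com.dwd.provider";                                        // the sync authority
ContentResolver.setIsSyncable(account, authority, 1);
ContentResolver.setSyncAutomatically(account, authority, true);
ContentResolver.addPeriodicSync(account, authority, Bundle.EMPTY, 60L * 60);  // interval in seconds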

[Feasibility] Except on Meizu phones, this scheme can wake the app up no matter how it was killed; on Xiaomi phones the "Shenyin" background-restriction mode has to be turned off. The scheme has been public for nearly a year, and many developers are using it now.

11. Whitelisting

Get the application onto the whitelist of the phone vendor or of security software, so that its process is not reclaimed by the system. For example, WeChat and QQ are on Xiaomi's whitelist, so WeChat is not killed by the system, although the user can still stop it manually.

[Feasibility] The success rate of this scheme is good, but the user can still kill the application by hand. Also, if your user base is not big enough it is hard to go and negotiate with every manufacturer; there are far too many domestic Android vendors and the cost is too high. But once your installs and active users reach WeChat's level, maybe the vendors will add your app to the whitelist on their own initiative.

I combined schemes 4, 6, 7 and 10, which kept the service alive successfully on about 90% of phones. Keeping a Service alive really is a battle of attack and defense: applications want to keep running in the background because of their requirements, while the system kills background services for the sake of performance, battery and security. It is also a protracted war; today's feasible scheme may be shut down some day. There is no end to learning, and the exploration never stops.




Front-end rendering acceleration – Big Pipe

    Preface

First-screen rendering speed has always been a pain point for the front end.

From the very beginning, where a static resource server simply returned files, to distributing files through a CDN, and then to server-side rendering, every step has been about getting the best possible experience for users.

Big Pipe is a technique Facebook adopted to speed up first-screen loading; the effect can be clearly felt on Facebook's home page.

Brief introduction

     

At first glance it looks just like Ajax.

First of all, remember that an Ajax call is just another ordinary HTTP request. A complete HTTP request goes through:

DNS resolving -> TCP handshake -> HTTP request -> server processing -> response

The whole network round trip already costs quite a lot of time.

Big Pipe, by contrast, only needs the one connection that is already open, with no additional requests.

The technology behind Big Pipe is not really complicated. The server first sends the browser a page whose <body> tag has not been closed yet; the browser renders the DOM it has received so far (applying CSS as it arrives). Meanwhile the TCP connection is not closed, and because <body> is still open the server can keep pushing more DOM to the browser, even <script> tags.

This way the browser can immediately get a page without data (the data modules show a loading placeholder), while the server fetches the data from the database; the server then pushes a <script> tag that carries the data, and when the browser receives it, the placeholder is replaced with the real content.
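The example project linked at the end of this article is written in Node.js. Purely to illustrate the flow, here is a rough sketch of the same idea as a Java servlet (the URL, markup and the simulated slow query are placeholders):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/bigpipe")
public class BigPipeServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/html;charset=UTF-8");
        PrintWriter out = resp.getWriter();

        // 1. send the page skeleton immediately, without closing <body>
        out.write("<html><head><title>BigPipe demo</title></head><body>");
        out.write("<div id=\"news\">loading...</div>");
        out.flush();                                  // the browser can already render the placeholder

        // 2. fetch the data (simulated slow database query)
        String data = loadNewsFromDatabase();

        // 3. push a <script> that fills the placeholder, then close the document
        out.write("<script>document.getElementById('news').innerHTML='" + data + "';</script>");
        out.write("</body></html>");
        out.flush();
    }

    private String loadNewsFromDatabase() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException ignored) {
        }
        return "today's news";
    }
}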

The difference from server-side rendering

Server-side rendering and Big Pipe have a lot in common: in both, the server fetches the data, fills it into the page DOM, and returns it to the client. The biggest difference is that Big Pipe can return a page to the user before the data is ready, reducing the waiting time and preventing a slow data query from leaving the user staring at a blank page.

The code used in this article's example: the full project can be found at https://github.com/joesonw/bigpipe-example

    'use strict';

    Android application performance optimization – startup acceleration

Recently I have been studying Android performance optimization. The first problem to solve was the jank when opening a web page inside the app: I introduced a third-party WebView component, but that created another problem, because initializing the third-party WebView in the Application made the app's startup noticeably slower. Today let's talk about how to optimize and accelerate startup from two angles: Application and Activity.

One. Accelerating the Application

An app's startup time is the time from the user tapping the app icon to the first screen being presented; shortening it and showing the first screen quickly greatly improves the user experience. There are two main optimizations on the Application side: reduce the execution time of the Application's onCreate method, and use a theme and a Drawable to make the first screen appear faster.

1. Reduce the execution time of the onCreate method

An application freshly created with Android Studio starts very fast, but as the app grows more complex and more third-party components are integrated, more and more initialization piles up in onCreate. You then clearly notice that startup stutters: the white or black screen shown before the first interface appears lasts longer and longer, because onCreate takes too long. To solve this, an IntentService can take over the time-consuming initialization.

The code of the IntentService is roughly as follows:

public static void start(Context context) {
    Intent intent = new Intent(context, DwdInitService.class);
    // set an action describing which initialization to run, then start the service
    context.startService(intent);
}

@Override
protected void onHandleIntent(Intent intent) {
    if (intent != null) {
        final String action = intent.getAction();
        // run the time-consuming initialization (e.g. X5WebView) for this action here
    }
}

In the Application, just call:

DwdInitService.start(this);

I moved the X5WebView initialization there, and the effect was quite obvious.

2. Optimize the presentation of the first screen

As mentioned above, when an app starts there is always a white or black screen first, which is particularly bad for the user experience. How do we get rid of it? We can use a custom theme and Drawable to solve it. Take a simple demo as an example; the resulting effect is shown below.

     

<ImageView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="center"
    android:layout_marginBottom="24dp"
    android:src="@drawable/ic_launcher" />

</FrameLayout>

The code is simple, but every time you start the app you will notice a white screen before the page is displayed. Now transform the code as follows.

A. Define a Drawable, loading.xml

Set the background and logo images here.

B. Define a theme in styles.xml, with windowBackground set to loading.xml:

<style name="Theme.Default.NoActionBar" parent="@style/AppTheme">
    <item name="android:windowBackground">@drawable/loading</item>
    ...
</style>

C. Apply the defined theme to LoadingActivity.

Done. Now when the app starts, the white screen is gone and the user experience is improved.

Two. Accelerating the Activity

After entering the app, the speed of jumps between pages is also an important part of the user experience; for example, when an embedded web page is opened, there is a noticeable stutter after the button is tapped before the new page appears.

Optimizing an Activity is likewise about reducing the execution time of onCreate. The onCreate method usually consists of two parts: setContentView() inflating the layout, and initializing and filling in data.

The second part is easy to understand: do as little time-consuming data reading and computation in onCreate as possible, and use asynchronous callbacks to reduce the occupation of the UI main thread.

As for setContentView: every control in the layout has to be initialized, measured and laid out, and drawn, which are relatively time-consuming operations that slow down the display. In a case with no time-consuming data work in onCreate, profiling with TraceView showed that setContentView() took almost 99% of the time from the start of onCreate() to the end of onResume().

    Reduce the time spent on setContentView:

1. Reduce layout nesting levels

A. Use RelativeLayout

Reduce the use of LinearLayout and prefer RelativeLayout to cut down nesting. Nested LinearLayouts that use the layout_weight attribute are especially costly, because each child is measured twice; RelativeLayout is more tedious to write, but it reduces the nesting level and therefore the drawing time.

    B. use

     

    Use

     

    After the merge tag is used, the layout level is reduced accordingly. C. controls its own properties by controlling its own properties, reducing nesting levels, such as common linear arrangement menu layout, as follows

<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="62dip" ...>
    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="..." />
    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginLeft="15dp"
        ...
        android:textSize="18sp" />
</LinearLayout>

Using TextView's drawableRight attribute instead, the code becomes:

<TextView
    android:id="@+id/my_order"
    android:layout_width="match_parent"
    ...
    android:gravity="center_vertical"
    android:paddingLeft="28dip"
    ... />

The amount of code and the nesting level are both reduced, and the effect is the same.

2. Use ViewStub to delay inflation

ViewStub is a lightweight, invisible view. It can be placed in a layout to postpone inflating part of that layout until it is actually needed; when needed, you inflate it explicitly, for example by setting a flag and inflating it in onResume.

Characteristics: (1) A ViewStub can only be inflated once, after which the ViewStub object is cleared; in other words, once the layout a ViewStub points to has been inflated, it can no longer be controlled through the ViewStub. (2) A ViewStub can only inflate a layout file, not a specific View, although the layout file can of course contain just a single View. Usage scenarios: (1) a layout that, once inflated, will not change while the program is running (a complex layout); (2) what you want to show or hide is a whole layout file rather than a single View. In one case, optimizing with ViewStub cut the inflation time by one half to two thirds.

The code to inflate the layout, called for example from onCreate():

ViewStub viewStub = (ViewStub) findViewById(R.id.viewstub_demo_image);
viewStub.inflate();
