The Front End, from Entry to Re-entry — Asynchronous

What is asynchronous?

To answer that, we have to start with the JavaScript language itself.

Born in 1995, JavaScript was designed as a single-threaded scripting language for the browser. In other words, it can only do one thing at a time. Why? Imagine how the browser should respond if JavaScript were multithreaded and one thread added content to a DOM node while another thread deleted that very node.

Single-threading spares JavaScript the browser's complex synchronization problems, but a single thread forces the introduction of a concept: the task queue. In a civilized society, when demand outstrips supply, we queue up; when one task ends, the next begins. Sounds perfect — but is there a flaw? There is. Suppose you go to your favorite noodle shop and find a long queue at the door while the seats inside sit empty. The shop's process is: the customer orders and the waiter writes it down — the customer and the waiter stand there staring at each other — the kitchen hands the food to the waiter, who hands it to the customer — the customer takes the food to a table — done.

The worst part of this process is the stage where the customer and the waiter stare at each other: that is where the time is wasted, and why the long queue of people is unhappy. How do we optimize it?

After discussing it with the shop owner, I proposed this optimization:

The customer orders and the waiter writes it down — the waiter hands the customer a numbered tag — the customer sits down at a table and plays with their phone — the kitchen hands the food to the waiter, and the waiter delivers it to the customer by number — done.

The boss loved it: fewer people waiting in line, a more efficient waiter. He treated me to his best bowl of beef noodles.

That is asynchrony: the time-consuming operation is carried out elsewhere, and the result is put back onto the queue once it finishes. Meanwhile, other work can proceed instead of blocking. So how does JavaScript implement asynchrony? With the callback function.

callback

A callback function is simply a function that is called after a task finishes; the callback receives the task's result. In the JavaScript world, callbacks are everywhere. The simplest example is the setTimeout method.

```javascript
setTimeout(() => alert('I appear after 1000 milliseconds!'), 1000)
```

The code above waits 1000 milliseconds and then executes the callback `() => alert('I appear after 1000 milliseconds!')`.

Callbacks are easy to use and easy to understand, but what happens when several asynchronous results must be used together — when one asynchronous task is a precondition of another and feeds it its result? People started writing nested callbacks, commonly known as the callback pyramid.

```javascript
fun1(function (value1) {
    fun2(value1, function (value2) {
        fun3(value2, function (value3) {
            fun4(value3, function (value4) {
                // and so on...
            });
        });
    });
});
```

Code like this is very common in callback-heavy environments such as Node.js. It feels perfectly fine while you write it. A day later you look back at your code and wonder: what on earth is this?

At this point we set out on the road to find the best solution for asynchronous processing on the front end. As the title says: you think you have made it through the door, when in fact there is another door in front of you.

EventProxy and the publish/subscribe pattern

The author of this library is Pu Ling (Jackson Tian). Two famous quips on the subject, riffing on Lu Xun:

"The world originally had no deeply nested callbacks." — Jackson Tian

"But once enough people wrote them, there came to be }}}}}}}}}}}}." — fengmk2

EventProxy

Suppose you want to fetch data asynchronously from several addresses and process it only once all of it has arrived. Short of a callback pyramid, the simplest approach is to keep a counter such as `var count = 0;`, increment it in each callback, and run the final processing once the counter reaches the number of requests.
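Fleshed out, the counter approach might look like the sketch below. `fetchData` is a hypothetical stand-in that fakes an asynchronous request with setTimeout; the shape of the counter logic is the point.

```javascript
// fetchData is a made-up helper that simulates an asynchronous request.
function fetchData(name, delay, callback) {
  setTimeout(function () {
    callback(null, name + '-result');
  }, delay);
}

// Fire three requests, count them as they come back, and only run the
// final processing once the counter reaches 3.
function fetchAll(done) {
  var count = 0;
  var results = {};

  function collect(key) {
    return function (err, data) {
      results[key] = data;
      count++;
      if (count === 3) {
        done(results);
      }
    };
  }

  fetchData('data1', 30, collect('data1'));
  fetchData('data2', 10, collect('data2'));
  fetchData('data3', 20, collect('data3'));
}

fetchAll(function (results) {
  console.log(results);
});
```

It works, but the magic number 3 and the shared counter are exactly the kind of bookkeeping EventProxy was built to hide.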

After switching to EventProxy, you create a proxy with `var proxy = new EventProxy();`, tell it which events to wait for, and emit an event as each piece of data arrives.
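The original EventProxy sample did not survive in the recovered text, and the real `eventproxy` package may not be installed here, so below is a stripped-down, hypothetical `MiniProxy` showing only the shape of the `emit`/`all` idea — not the real library's code.

```javascript
// A toy version of EventProxy's core pattern: wait for a set of named
// events, then fire one callback with all the collected data.
function MiniProxy() {
  this.waiting = {};   // event name -> true while still unseen
  this.data = {};
  this.remaining = 0;
  this.callback = null;
  this.order = [];
}

MiniProxy.prototype.all = function (events, callback) {
  var self = this;
  self.order = events.slice();
  self.remaining = events.length;
  self.callback = callback;
  events.forEach(function (e) { self.waiting[e] = true; });
};

MiniProxy.prototype.emit = function (event, data) {
  if (!this.waiting[event]) return;   // unknown event or duplicate emit
  this.waiting[event] = false;
  this.data[event] = data;
  this.remaining--;
  if (this.remaining === 0) {
    var self = this;
    this.callback.apply(null, this.order.map(function (e) { return self.data[e]; }));
  }
};

var proxy = new MiniProxy();
proxy.all(['data1_event', 'data2_event'], function (d1, d2) {
  console.log(d1, d2);   // runs only after both events have been emitted
});
proxy.emit('data1_event', 'A');
proxy.emit('data2_event', 'B');
```

The real EventProxy does far more (error handling, repeated events, and so on); this only illustrates why `proxy.emit` plus `proxy.all` makes the hand-rolled counter disappear.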

This is a classic event publish/subscribe pattern. Let's set the code aside for a moment and look at what the publish/subscribe pattern actually is.

The publish/subscribe pattern is also called the observer pattern. It defines a one-to-many dependency in which multiple subscribers listen to one subject object at the same time; when the subject's state changes, it notifies all subscriber objects so they can update themselves automatically.

In plain terms: the chef at your favorite noodle shop goes home to get married, the new chef's beef noodles are not to your taste, and so you phone the shop every day asking when the old chef is coming back. The boss, driven half mad by getting the same call every day, asked me what to do. Simple, I said: write down the phone number of everyone who wants the old chef's beef noodles, and text them all when he returns. And so the boss's phone finally went quiet.

This trivial example is the publish/subscribe pattern, and we run into it constantly in daily life. The boss is the publisher, the customers are the subscribers: the customers subscribe to the news that the old chef is back, and when he returns, the boss publishes that news to them. What are the benefits? It decouples the logic: the publisher does not need to care about the subscribers' specific business logic, or even how many subscribers there are, in order to deliver messages to them; and subscribers do not have to keep asking the publisher whether there is news — they get the message the moment it exists.
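The story maps directly onto a few lines of code. This is a minimal publish/subscribe sketch (the names are made up for illustration): customers subscribe to a topic with a callback, and the boss publishes to it once.

```javascript
// A bare-bones publish/subscribe object for the noodle-shop story.
var shop = {
  topics: {},
  subscribe: function (topic, fn) {
    (this.topics[topic] = this.topics[topic] || []).push(fn);
  },
  publish: function (topic, message) {
    (this.topics[topic] || []).forEach(function (fn) { fn(message); });
  }
};

// Two customers leave their "phone numbers" (callbacks).
shop.subscribe('chef-returns', function (msg) { console.log('Customer A: ' + msg); });
shop.subscribe('chef-returns', function (msg) { console.log('Customer B: ' + msg); });

// The boss sends one text to everyone at once.
shop.publish('chef-returns', 'the old chef is back!');
```

Note how the boss (`publish`) never needs to know who the customers are or how many there are — exactly the decoupling described above.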

Back to EventProxy: in the example above, `proxy.emit('data1_event', data)` publishes an event, and `proxy.all` subscribes to the whole set of events, firing its callback once every one of them has been emitted.

EventProxy's implementation derives from Backbone's event module; if you are interested, go read the source on its GitHub.

Promise

What is a promise? The word itself means a commitment.

I remember that my first contact with promises was on my first Angular project, whose code was full of `promise.then().then()` chains.

At the time I naively assumed promises existed only in Angular; later I discovered that Angular's $q is just one implementation of the promise idea. The entire AngularJS code base leans heavily on promises — both the framework itself and the application code you write with it.

Back to the point. Many third-party libraries implement promises, such as when.js and Angular's $q, and they all follow the same specification: Promises/A+.

A Promise is a container holding the result of an event (usually an asynchronous operation) that will finish at some point in the future. A promise has three possible states: pending, fulfilled, and rejected. The state can only move from pending to fulfilled or from pending to rejected; it can never move backward, and fulfilled and rejected can never convert into each other. Note, too, that promise objects can be passed around and chained.

For example: you order at the noodle shop and the waiter gives you a meal ticket. The ticket itself is useless — you can neither eat it nor sell it — yet you have paid for the meal; that is the pending state. When the kitchen is ready, the waiter swaps the ticket for your beef noodles; that is the fulfilled state. If the kitchen runs out of beef and the waiter comes over to apologize, offering a different bowl or a refund, that is the rejected state and its handling. A rejection is never converted back into fulfillment: whether you take another bowl or your money back, it goes through the promise's rejection path, because fulfilled and rejected cannot convert into each other.

Now that we have the theory, let's look at a raw promise:

```javascript
var promise = new Promise(function (resolve, reject) {
    // ...time-consuming logic...
    var ok = true; // did everything go well?
    if (ok) {
        resolve('beef noodles');
    } else {
        reject(new Error('beef is sold out'));
    }
});

promise.then(function (value) {
    // got the beef noodles
}, function (error) {
    // handle the failure
});
```

The variable promise here is an instance of Promise. If the logic runs without error, you get your beef noodles in the first callback passed to then; if something goes wrong, the error is handled in the second callback passed to then.

Promise also provides a `Promise.all()` method; see MDN: https://developer.mozilla.org/zh-CN/docs/Web/JavaScript/Reference/Global_Objects/Promise/all
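A small usage sketch of `Promise.all`: it takes an array of promises and fulfills with an array of results in the same order once every input promise has fulfilled, regardless of which one finishes first.

```javascript
// Two promises that resolve at different speeds.
const p1 = new Promise(function (resolve) {
  setTimeout(function () { resolve('data1'); }, 20);
});
const p2 = new Promise(function (resolve) {
  setTimeout(function () { resolve('data2'); }, 10);
});

Promise.all([p1, p2]).then(function (results) {
  // Results keep the order of the input array, not the finishing order.
  console.log(results); // ['data1', 'data2']
});
```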

The biggest advantage of promises is that chained calls solve the problem of deeply nested callbacks: the result looks elegant and is easy to understand and use. But do you think that is all there is to JavaScript asynchronous programming?
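To see the flattening concretely, here is a sketch (`step` is an invented helper that resolves after a short delay): because each `then` returns a new promise, the stages line up vertically instead of nesting into a pyramid.

```javascript
// step is a hypothetical asynchronous stage: it resolves n + 1 later.
function step(n) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(n + 1); }, 10);
  });
}

step(0)
  .then(function (v) { return step(v); })
  .then(function (v) { return step(v); })
  .then(function (v) {
    console.log(v); // 3 — three asynchronous steps, zero nesting
  });
```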

Generator and TJ's co

Generator literally means "a generator". In the JavaScript world, an ordinary function cannot be suspended once it starts executing; it has only the states "called" and "not called". What happens if a function can be paused?

A Generator is exactly that: a function that can be suspended. In essence it can be understood as a special data structure; compared with an ordinary function, its declaration carries an extra `*`.

So how does it implement pausing? Through the yield keyword inside the function and the next method outside it. Let's look at an example to get a feel for the syntax.

```javascript
var g = function* (a, b) {
    yield a + b;
    yield a * b;
};

var generator = g(1, 2);
```

Let's walk through this example step by step.

1. First, we create a Generator function named g.

2. We instantiate it by calling it, naming the result generator. This generator is a Generator object with a next method; until next is called, the function body does not run at all — it stays suspended before the first yield.

3. When we call generator.next() for the first time, the function runs up to the first yield and pauses there, returning {value: 3, done: false}.

4. The second call behaves like the first, pausing at the second yield. On the third call there is no yield left, so the returned done is true.

It is important to note that next() can also take an argument: the argument becomes the value of the yield expression at which the function last paused. Let's adjust the example to see this.

```javascript
var g = function* (a, b) {
    var sum = yield a + b;
    console.log(sum);
};

var generator = g(1, 2);
```

This example is almost the same as the previous one, except that we assign the result of yield to the variable sum. The first call to generator.next() runs to the yield and returns:

```javascript
{ value: 3, done: false }
```

Then we call generator.next("hello"). The function resumes, and "hello" becomes the value of the paused yield expression — in effect `var sum = "hello"` — so console.log(sum) prints hello, and the call returns:

```javascript
{ value: undefined, done: true }
```

In this way data can be fed into the function dynamically, which makes the logic more flexible and brings us to our goal: asynchronous optimization. How do we use a Generator for asynchronous work? The first idea that comes to mind is to pass the value back in with generator.next(data) once the asynchronous work has finished.

```javascript
var promise = new Promise(function (resolve) {
    setTimeout(function () {
        resolve('hello');
    }, 1000);
});

var g = function* () {
    var data = yield promise;
    console.log(data);
};

var generator = g();

generator.next().value.then(function (data) {
    generator.next(data);
});
```

In this example I first create a promise to simulate an asynchronous request, then yield the promise inside the Generator function. The first generator.next() returns the promise as its value; we wait on that promise, and when it resolves we call generator.next(data), which resumes the function and prints hello.

The asynchronous code now reads a little like synchronous code — but don't you find manually calling generator.next() after every step cumbersome and ugly?

Can this be simplified? Yes. Enter the co library, written by the famous programmer TJ. co is a Generator-based flow-control library whose main job is to call .next() for you automatically.

In its earliest versions, co could only yield thunk functions; promise support arrived in 1.1.0, and later it gradually came to support generators, generator functions, objects, and arrays.

```javascript
var co = require('co');

co(function* () {
    var data = yield promise;
    console.log(data);
});
```

This spares us the manual .next() calls.
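Since co itself may not be installed where you are reading this, here is a hand-rolled miniature of its core idea — a toy runner, not the real library (real co also handles errors, thunks, arrays, and objects): it keeps calling .next(), feeding each resolved promise value back into the generator.

```javascript
// miniCo: automatically pump a generator that yields promises.
function miniCo(genFn) {
  return new Promise(function (resolve) {
    var gen = genFn();
    function pump(input) {
      var step = gen.next(input);        // resume with the previous result
      if (step.done) { resolve(step.value); return; }
      Promise.resolve(step.value).then(pump); // wait, then resume again
    }
    pump(undefined);
  });
}

miniCo(function* () {
  var a = yield Promise.resolve(1);
  var b = yield Promise.resolve(2);
  return a + b;
}).then(function (sum) {
  console.log(sum); // 3
});
```

The whole trick of co fits in those few lines: the library's job is simply to be the loop that you would otherwise write by hand around .next().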

Since version 4.0.0 the co function returns a promise, so you can attach callbacks to it:

```javascript
var co = require('co');

co(function* () {
    return yield promise;
}).then(function (value) {
    console.log(value);
});
```

Earlier versions of co (around 1.4.0) provided a join method for running tasks concurrently; nowadays you can simply yield an array to run several asynchronous tasks in parallel:

```javascript
co(function* () {
    var results = yield [promise1, promise2]; // array: resolved in parallel
});
```

co also lets you use try/catch inside the function body to handle errors. For the implementation details, see the source: https://github.com/tj/co/blob/master/index.js

Note that Generator is an ES6 feature. Node has now reached version 7 and supports it directly; on the front end, use Babel. If you just want to experiment, open the Chrome console — Chrome runs the V8 engine, so Generator and Promise work out of the box.

Async/await

We have covered co — but is there an official implementation of such a pleasant asynchronous style? Congratulations: in this era of rapidly evolving JavaScript, there is an official counterpart. It is async/await.

This ES7 proposal is essentially syntactic sugar over Generator. Look at the following example:

```javascript
const promise = new Promise((resolve) => {
    setTimeout(() => resolve('hello'), 1000);
});

async function run() {
    const data = await promise;
    console.log(data); // hello
}

run();
```

Doesn't it look exactly like co? The only differences are that the `*` becomes `async`, `yield` becomes `await`, and there is no need to call `.next()` by hand.

If we want several asynchronous operations to run concurrently, as in the EventProxy example, we can combine it with Promise.all:

```javascript
const promise1 = new Promise((resolve) => {
    setTimeout(() => resolve('data1'), 1000);
});
const promise2 = new Promise((resolve) => {
    setTimeout(() => resolve('data2'), 1000);
});

async function run() {
    const [data1, data2] = await Promise.all([promise1, promise2]);
    console.log(data1, data2);
}

run();
```

Compared with Generator, async/await is semantically clearer: `async` declares that the function contains asynchronous operations, and `await` marks an expression whose result must be waited for.
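The article showed try/catch working inside co's generator bodies; the same ergonomics carry over to async/await. A small sketch in the noodle-shop spirit (the function and messages are invented): a rejected promise surfaces as a thrown exception at the await site.

```javascript
// A rejected promise becomes an ordinary exception under await.
async function order() {
  try {
    await Promise.reject(new Error('beef is sold out'));
    return 'noodles';
  } catch (err) {
    return 'refund: ' + err.message;
  }
}

order().then(function (outcome) {
  console.log(outcome); // refund: beef is sold out
});
```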

This pleasant syntax is natively supported from Node 7 onward; on the front end it can be compiled with Babel. And of course the ever-handy Chrome console can run it directly.

Closing words

I heard of Rx.js for the first time at the NingJS conference (forgive my ignorance); it is another fine approach to asynchronous processing, and I will write an introduction to it later. The front end has so many asynchronous solutions, but in the final analysis they all exist because people fear callbacks. They share one trait: the code looks more and more like synchronous code, which makes it more readable — your train of thought no longer jumps around with the callbacks. To conclude: this article is only an introduction and does not go into concrete implementations; to learn these tools well, you need to go deep into their code and understand the thinking behind them.

Reference resources

https://github.com/alsotang/node-lessons/tree/master/lesson4


http://www.infoq.com/cn/articles/generator-and-asynchronous-programming


Accelerating Front-end Rendering — BigPipe

Preface

First-screen rendering speed has always been a pain point for the front end.

From the very beginning, when a static resource server simply returned files, to CDN-distributed assets, to server-side rendering technology — every step has been about giving the user the best possible experience.

BigPipe is a first-screen loading acceleration technique adopted by Facebook; its effect can be clearly felt on Facebook's home page.

Brief introduction

 

At first glance it looks just like Ajax. But first, understand that Ajax is simply another ordinary HTTP request, and a complete HTTP request goes through: DNS resolution -> TCP handshake -> HTTP request -> server processing -> HTTP response. The whole network round trip costs a considerable amount of time.

BigPipe, by contrast, needs only the one existing connection and no additional requests.

The technology behind BigPipe is actually not complex: the server sends the browser a document whose `<body>` has not yet been closed. The browser renders whatever DOM it has received so far (applying CSS along the way, if present). Because the TCP connection is still open and `<body>` is still unclosed, the server can keep pushing more DOM to the browser — even `<script>` tags.

This way the browser can show a page immediately, before the data exists (the data modules display a loading state), while the server fetches the data from the database; the server then pushes a `<script>` tag with the data inside, and once the browser receives it, the script swaps the data into place.

The difference from server-side rendering

Server-side rendering and BigPipe have a lot in common: in both, the server fetches the data and fills it into the page's DOM before returning it to the client. The biggest difference is that BigPipe can return a page to the user before the data is ready, reducing the wait and preventing a slow data query from holding the user on a blank page.

The code used in this article's example — the full project is at: https://github.com/joesonw/bigpipe-example

'use strict';

Android Application Performance Optimization — Startup Acceleration

While recently studying Android performance optimization, I first tackled the stutter when opening a web page inside the app by introducing a third-party WebView component. That introduced another problem: the third-party WebView's initialization sat in the Application class, which lengthened the app's startup time. Today let's talk about optimizing startup from two angles: Application and Activity.

One. Application acceleration

An app's startup time is the time from the user tapping the app icon to the app presenting its first screen; shortening it and showing the first screen quickly greatly improves the user experience. There are two main optimizations on the Application side: reduce the execution time of Application's onCreate method, and use a theme with a Drawable to make the first screen appear faster.

1. Reduce the execution time of the onCreate method

Build a fresh application with Android Studio and you will find it starts very fast. But as the app grows more complex and integrates more and more third-party components, more and more initialization lands in onCreate, and you clearly notice the startup stutter: the white or black screen before the first interface appears gets longer. This is onCreate taking too long. To solve it, an IntentService can handle the time-consuming initialization.

The IntentService looks like this:

```java
public class DwdInitService extends IntentService {

    public DwdInitService() {
        super("DwdInitService");
    }

    public static void start(Context context) {
        Intent intent = new Intent(context, DwdInitService.class);
        context.startService(intent);
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (intent != null) {
            // perform time-consuming initialization here off the main thread
        }
    }
}
```

In Application's onCreate, just start it:

```java
DwdInitService.start(this);
```

I moved the X5WebView initialization there, and the effect was quite noticeable.

2. Optimize the presentation of the first screen

As mentioned above, when an app starts there is always a white or black screen first, which is particularly bad for the user experience. How do we eliminate it? With a custom theme and Drawable. Here is a simple demo as a case study; the rendering is shown below.

 

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:layout_marginBottom="24dp"
        android:src="@drawable/ic_launcher" />

</FrameLayout>
```

The code is simple, yet every time you start the app you will notice a white screen before this page appears. Now transform the code as follows.

a. Define a loading.xml Drawable that sets the background and the logo image.

b. In styles, define a theme whose windowBackground is set to loading.xml:

```xml
<style name="Theme.Default.NoActionBar" parent="@style/AppTheme">
    <item name="android:windowBackground">@drawable/loading</item>
</style>
```

c. Apply the defined theme to LoadingActivity.

Done. Start the app now: the white screen is gone, and the user experience is noticeably better.

Two. Activity acceleration

Once inside the app, the speed of jumps between pages is also an important part of the user experience — for example, opening an embedded web page: you tap the trigger button and the screen stutters before the jump completes.

Optimizing an Activity again comes down to reducing the execution time of onCreate, which typically consists of two parts: setContentView() to inflate the layout, and initializing and filling in data.

The second part is the easier one: keep time-consuming data reads and computation out of onCreate as much as possible, and use asynchronous callbacks to reduce the load on the UI main thread.

Now for setContentView: every control in the layout must be initialized, measured, laid out, and drawn — mostly time-consuming operations that slow down display. With no heavy data work in onCreate, profiling with the TraceView tool shows setContentView() taking up almost 99% of the time from the start of onCreate() to the end of onResume().

To reduce the time spent in setContentView:

1. Reduce layout nesting levels

a. Prefer relative layout. Cut down on LinearLayout, use RelativeLayout where possible, and reduce nesting. Nested LinearLayouts that use the layout_weight property are especially costly, because each child is measured twice; RelativeLayout is more tedious to write, but it flattens the hierarchy and reduces drawing time.

b. Use the merge tag. When a layout's root element would simply duplicate its parent's container, the merge tag removes that redundant level, and the layout hierarchy shrinks accordingly.

c. Reduce nesting by using a control's own attributes — for example, the common linearly arranged menu row, usually written like this:


```xml
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="62dip"
    android:orientation="horizontal">

    <!-- the src value was unrecoverable from the original -->
    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginLeft="15dp"
        android:textSize="18sp" />

</LinearLayout>
```

Using TextView's own drawableRight attribute instead, the code becomes:

```xml
<!-- drawable value omitted: unrecoverable from the original -->
<TextView
    android:id="@+id/my_order"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_marginLeft="15dip"
    android:gravity="center_vertical"
    android:paddingLeft="28dip" />
```

The amount of code and the nesting level both drop, and the effect is just as good.

2. Use ViewStub to defer inflation

ViewStub is a lightweight, invisible view that defers inflating a layout within its parent until it is actually needed. One common pattern is to set a flag and inflate the deferred layout via inflate in onResume.

Characteristics:

(1) A ViewStub can only be inflated once, after which the ViewStub object is cleared. In other words, once the layout a ViewStub points to has been inflated, it can no longer be controlled through the ViewStub.

(2) A ViewStub can only inflate a layout file, not a specific View — though of course a layout file may contain just one View.

Usage scenarios:

(1) A layout that, once inflated, will not change while the program runs (a complex layout, say).

(2) What you want to show or hide is a layout file, not a single View.

In one case, optimizing with ViewStub cut the inflation time by a half to two thirds.

Inflate the stub in the onCreate() method when needed:

```java
ViewStub viewStub = (ViewStub) findViewById(R.id.viewstub_demo_image);
viewStub.inflate();
```

Communication Between Android Components

First, let's survey the ways different components communicate with one another in Android.

(Tip: everything below, apart from file storage and ContentProvider, refers to communication within the same process. Cross-process communication additionally requires Messenger or AIDL, which deserve a detailed write-up of their own and are not discussed here.)

Mode 1: passing values with Intent (between Activity and Activity)

Sending side:

```java
Intent intent = new Intent();
intent.putExtra("extra", "Activity1");
intent.setClass(Activity1.this, Activity2.class);
startActivity(intent);
```

Receiving side:

```java
Intent intent = getIntent();
String data = intent.getStringExtra("extra");
TextView tv_data = (TextView) findViewById(R.id.tv_data);
tv_data.setText(data);
```

Mode 2: passing values with Binder (between Activity and Service)

1. Define the Service

Inside the Service, define an inner class that extends Binder; through it, hand the Service object to the Activity that needs it, so the Activity can call the Service's public methods and properties:

```java
public class MyService extends Service {
    // instantiate the Binder subclass defined below
    private final IBinder binder = new MyBinder();

    public class MyBinder extends Binder {
        public MyService getService() {
            return MyService.this;
        }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return binder;
    }
}
```

2. Bind the Service from the Activity

The Activity obtains the MyService object via the IBinder's getService and can then call its public methods:

```java
public class MyBindingActivity extends Activity {
    private MyService myService;

    private final ServiceConnection connection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder service) {
            myService = ((MyService.MyBinder) service).getService();
        }

        @Override
        public void onServiceDisconnected(ComponentName name) {
            myService = null;
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        bindService(new Intent(this, MyService.class), connection, BIND_AUTO_CREATE);
    }
}
```

Mode 3: passing values with Broadcast

This uses Broadcast sending and receiving to implement the communication.

Sending a Broadcast:

```java
static final String ACTION_BROAD_TEST = "com.my.broad.test";
// send
Intent mIntent = new Intent(ACTION_BROAD_TEST);
sendBroadcast(mIntent);
```

Receiving the Broadcast:

```java
// register the receiver dynamically
public void registerMessageReceiver() {
    MessageReceiver receiver = new MessageReceiver();
    IntentFilter filter = new IntentFilter();
    filter.addAction(ACTION_BROAD_TEST);
    registerReceiver(receiver, filter);
}
```

Mode 4: Application, SharedPreferences, file storage, databases, ContentProvider, and so on

The idea is to park data in something with a longer life cycle — the Application object — for the various Activities and other components to read and write. It is not safe, though: the Application can well be reclaimed. SharedPreferences, file storage, and databases all ultimately persist to files; we won't dwell on them.

Mode 5: interfaces

Define an interface; whatever cares about an event implements it, and the place that triggers the event lets interested parties register and unregister. This is the observer pattern, and its problem is obvious: it tends to couple components together, and the ever-multiplying interfaces become a chore. For space reasons we won't expand on it.

To sum up, every one of these communication mechanisms has problems: mode 5 couples things badly, especially as interfaces multiply; broadcasts are a poor fit when an Activity and a Fragment need to interact; and so on. So we need something simpler — EventBus — to achieve low-coupling communication between components.

Mode 6: EventBus

An introduction to the EventBus library

EventBus is a publish/subscribe event bus optimized for Android.

 

- Decouples event senders and receivers
- Works well across Activities, Fragments, and background threads
- Avoids complex, error-prone dependencies and lifecycle issues
- Makes your code simpler and faster
- Is tiny (the jar is under 50 KB)
- Is already used by apps with more than 100,000,000 combined installs
- Has advanced features such as delivery threads, subscriber priorities, and so on

Using EventBus takes three steps

1. Define events:

```java
public class MessageEvent { /* additional fields if needed */ }
```

2. Prepare subscribers:

```java
eventBus.register(this);

public void onEvent(AnyEventType event) { /* do something */ }
```

3. Post events:

```java
eventBus.post(event);
```

Here is an example of EventBus usage from the web: http://blog.csdn.net/jdsjlzx/article/details/40856535

Problems with EventBus?

Of course, EventBus is no panacea, and problems crop up in practice. Precisely because it is so convenient, it gets misused, sometimes making the code logic more chaotic instead — for example, some places end up sending messages in a loop. In a later chapter we will look carefully at whether there are better alternatives to EventBus, such as RxJava.

Android Service Keep-Alive: Attack and Defense

In June 2015 the company launched a project — an app similar to ride-hailing software that needs to upload latitude and longitude in real time — which raised the problem of keeping a background Service alive. Given such business scenarios, many developers run into the keep-alive problem, and questions abound online, such as: how do I make an Android program keep running in the background, like QQ or WeChat, without being killed? There are many answers, but only a few approaches come close to the desired effect. Below I will walk through the various schemes in light of my own development experience.

One. Why keep-alive?

Keep-alive matters because we want our service or process to keep running in the background, while all sorts of forces conspire to dash that hope. The main ones: (1) the Android system's memory reclamation; (2) phone manufacturers' custom management systems, such as power and memory managers; (3) third-party software; (4) the user manually stopping the app.

Two. Keep-alive techniques

1. Modify the return value of Service's onStartCommand method

Can a service restart itself after being stopped? The usual trick is to change the return value to START_STICKY. onStartCommand() returns an integer describing whether the system should restart the service after killing it. There are three return values:

START_STICKY: if the service's process is killed, the system keeps the service in the started state but does not retain the delivered Intent object; it will later try to recreate the service.

START_NOT_STICKY: if the service is killed abnormally after onStartCommand has finished, the system will not automatically restart it.

START_REDELIVER_INTENT: if the service is killed abnormally after onStartCommand has finished, the system will restart it after a while and redeliver the original Intent to it.

[Feasibility] Reading those descriptions, setting the return value to START_STICKY or START_REDELIVER_INTENT looks like the hope we need. But in actual testing, apart from kills due to low memory, only a handful of cases and models actually restart the service.

2. Restart from Service's onDestroy method

In onDestroy, send a broadcast; a receiver catches it and restarts the Service:

```java
@Override
public void onDestroy() {
    super.onDestroy();
    // the action string here is illustrative
    sendBroadcast(new Intent("com.dwd.action.RESTART_SERVICE"));
}
```

[Feasibility] Under the four causes listed above, when the Service is killed the whole app process is usually wiped out too; onDestroy never even executes, so nothing gets the chance to restart the service.

3. Raise the Service priority

Raise the priority when registering the Service:

<service android:name="com.dwd.service.LocationService" android:exported="false">

[Feasibility] This method is invalid: a Service has no such priority attribute.

4. Foreground service

A foreground service is a running service the user is actively aware of; when the system needs to free memory, it will not kill such a process. A foreground service must show a notification in the status bar.

NotificationCompat.Builder nb = new NotificationCompat.Builder(this);

[Feasibility] This has a real effect against system reclamation and lowers the probability of being killed, but under extremely low memory the process is still killed, and it will not be restarted. Cleanup tools or a manual force-stop also kill the process with no restart.

5. Process guardians

There are two implementations. One uses dual services or dual processes: start two services (or processes) that watch each other, and when one is killed the other starts it again. The other forks a child process from the native layer to guard the main process.

[Feasibility] With the first approach, the two processes or services are killed together with the application process, so neither survives to restart the other. The second approach really can wake the app up, but Android 5.0 and above put forked children into the same process group as the parent; when the main process dies the whole group is killed, so the fork trick cannot wake the app on Android 5.0+.
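The mutual-watching idea behind the dual-guardian scheme can be sketched with plain Java threads standing in for the two Android processes/services. This is only a conceptual toy (all names and timings are illustrative, and it is not Android code): each guardian polls its peer and revives it when it dies.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the "dual guardian" scheme: two workers watch each other, and
// when one dies the survivor restarts it. Plain threads stand in for the two
// Android processes/services; names and timings are illustrative only.
public class GuardianDemo {
    static final AtomicInteger restarts = new AtomicInteger();
    static volatile Thread a, b;

    static Thread spawn(String name) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                Thread peer = name.equals("A") ? b : a;
                if (peer != null && !peer.isAlive()) {       // peer was "killed"
                    restarts.incrementAndGet();
                    Thread revived = spawn(peer.getName());  // guardian revives it
                    if (name.equals("A")) b = revived; else a = revived;
                }
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
        }, name);
        t.setDaemon(true);
        t.start();
        return t;
    }

    // Start both guardians, "kill" B, and report how many revivals happened.
    static int simulate() throws InterruptedException {
        restarts.set(0);
        a = spawn("A");
        b = spawn("B");
        Thread.sleep(50);
        b.interrupt();        // simulate the system killing one worker
        Thread.sleep(100);    // give the survivor time to notice and revive it
        int n = restarts.get();
        a.interrupt();
        b.interrupt();
        return n;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("restarts observed: " + simulate());
    }
}
```

As the feasibility note above explains, on Android 5.0+ the real-world version of this fails because both guardians sit in one process group and die together; the sketch only illustrates the mechanism.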

6. Listening for system broadcasts

Listen for certain system broadcasts, such as boot completed, screen unlock, network connectivity changes and application state changes, then check whether the Service is alive and start it if not.

[Feasibility] Since Android 3.1, to strengthen security and optimize performance, the system restricts these broadcasts: an application that has never been launched after installation, or that the user has force-stopped, cannot receive common system broadcasts such as boot completed, screen unlock or connectivity changes. On top of that, the latest Android N removed the implicit network-change broadcast. Sad but true: nobody can rely on this anymore.

7. Apps waking each other

Different apps can wake each other with broadcasts. Alibaba apps such as Alipay, Taobao and Tmall do this: open any one of them and the other Alibaba apps wake up too; in fact almost all the BAT app families behave this way. Many push SDKs will also wake the apps that embed them.

[Feasibility] Mutual wake-up requires the apps to be related to one another, and even SDK-based wake-up cannot revive an app the user has force-stopped.

8. A one-pixel Activity

After the app goes to the background, leave a page of only one pixel on the desktop, so the app still counts as foreground and protects itself from background cleanup tools. This is the trick that Xiaomi exposed Tencent QQ using.

[Feasibility] It still gets killed.

9. Install the APK under /system/app, turning it into a system-level application.

[Feasibility] Only suitable for preinstalled applications; an ordinary app cannot turn itself into a system-level one.

10. Use the account and sync mechanism provided by the Android system.

Create an account in the application, then enable auto-sync and set the sync interval, and let synchronization wake the app. Once created, the account is visible under Settings > Accounts. The user may delete the account or stop synchronization, so check periodically that the account can still sync.

// create the account

[Feasibility] Except on Meizu phones, this scheme wakes the app successfully no matter how it was killed. On Xiaomi phones the "Shenyin" (background-freezing) mode must be turned off. The scheme has been around for almost a year, and many developers use it now.

11. Whitelists

Get the application into the whitelist of the phone vendor or of the security software, so the system does not reclaim its process. WeChat and QQ are on Xiaomi's whitelist, for example, so the system will not kill WeChat, although the user still can stop it.

[Feasibility] The success rate of this scheme is good, but users can still kill the app manually. And unless your user base is large enough, going to every big manufacturer to negotiate is hopeless: domestic Android vendors are far too numerous and the cost is too high. Once your installs and active users approach WeChat's, though, manufacturers may add your app to the whitelist on their own initiative.

I combined schemes 4, 6, 7 and 10, and the service survived successfully on about 90% of phones. Keep-alive is really a war of attack and defense: apps want to run in the background for business reasons, while the system kills background services for performance and battery. It is also a protracted war; today's feasible scheme may be shut down by Google some day. There is no end to learning, and exploration never stops.

How we use Presto

Reasons for use:

Our BI colleagues and big-data developers need to query all kinds of data through Hive every day, and more and more reporting business runs on Hive. Although the CDH cluster has Impala deployed, most of our Hive tables use the ORC format, and Impala's support for ORC is unfriendly. Before Presto, big-data queries went through Hive on MapReduce, and query efficiency was low.

About Presto:

Presto is an open-source distributed SQL query engine, designed for interactive analytic queries over massive data. It mainly addresses the slow interactive analysis of commercial data warehouses. It supports standard ANSI SQL, including complex queries, aggregations, joins and window functions.

Working principle:

Presto's execution model is fundamentally different from Hive/MapReduce. Hive translates a query into a multi-stage chain of MapReduce tasks that run one after another; each task reads its input from disk and writes its intermediate result back to disk. Presto does not use MapReduce at all: it uses a custom query and execution engine whose operators are designed to support SQL semantics. Besides an improved scheduling algorithm, all data processing happens in memory, with the processing stages forming a pipeline across the network. This avoids unnecessary disk reads and writes and the extra latency they bring. The pipelined execution model runs multiple processing stages at once and streams data from one stage to the next as soon as it becomes available, which greatly reduces the end-to-end latency of all kinds of queries.
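The staged-versus-pipelined contrast can be sketched in a few lines of Java streams. This is only a conceptual analogy (not Presto code; the operators are made up): the staged version materializes every intermediate result the way Hive writes between MapReduce jobs, while the pipelined version streams each row through all operators in one pass.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Conceptual sketch of staged (MapReduce-style) vs pipelined execution.
public class PipelineSketch {

    // Staged: each "job" fully materializes its output before the next starts,
    // like Hive writing intermediate results to disk between stages.
    static long staged(List<Integer> rows) {
        List<Integer> stage1 = rows.stream().filter(r -> r % 2 == 0).collect(Collectors.toList());
        List<Integer> stage2 = stage1.stream().map(r -> r * 10).collect(Collectors.toList());
        return stage2.stream().mapToLong(Integer::longValue).sum();
    }

    // Pipelined: one fused operator chain; each row flows through filter,
    // map and aggregate without any intermediate collection.
    static long pipelined(List<Integer> rows) {
        return rows.stream().filter(r -> r % 2 == 0).mapToLong(r -> r * 10L).sum();
    }

    public static void main(String[] args) {
        List<Integer> rows = IntStream.rangeClosed(1, 100).boxed().collect(Collectors.toList());
        // Both plans compute the same answer; only the execution shape differs.
        System.out.println(staged(rows) + " == " + pipelined(rows));
    }
}
```

The real engine's gains come from doing this across machines while keeping intermediate data in memory, but the shape of the computation is the same.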

Use cases:

1. Everyday Hive queries: more and more colleagues now query Hive through Presto; compared with Hive on MapReduce, query efficiency has improved a great deal.

Using Sidecar to introduce Node.js into Spring Cloud

Theory

Introduction

Spring Cloud is a popular microservice solution today. It combines the development convenience of Spring Boot with the rich solutions of Netflix OSS. As we all know, unlike Dubbo, Spring Cloud builds the whole service system on REST services over HTTP(S).

Is it possible, then, to develop some of these REST services in a non-JVM language, such as the Node.js we know so well? Of course. But a bare REST service alone cannot join the Spring Cloud system: we also want the Eureka provided by Spring Cloud for service discovery, Config Server for configuration management, and Ribbon for client-side load balancing. This is where Spring Cloud Sidecar gets to show its talents.

Sidecar originates from Netflix Prana. It provides an HTTP API through which you can query all instances of a registered service (host, port, and so on). It also embeds a Zuul proxy that pulls the relevant routes from Eureka, so Spring Cloud Config Server can be reached either directly or through the proxy.

One thing to note: the Node.js application you develop must implement a health-check endpoint, so that Sidecar can report the health of this service instance to Eureka.

To use Sidecar, create a Spring Boot application annotated with @EnableSidecar. Let's look at what this annotation does.

@EnableCircuitBreaker
@EnableDiscoveryClient
@EnableZuulProxy
// ... (other meta-annotations abridged)
public @interface EnableSidecar { }

See: the Hystrix circuit breaker, Eureka service discovery and the Zuul proxy are all switched on by this one annotation.

Health check

Next, we add sidecar.port and sidecar.health-uri to application.yml. The sidecar.port property is the port the Node.js application listens on; it lets Sidecar register the app with Eureka. sidecar.health-uri is a URI that mimics a Spring Boot health-indicator endpoint; it must return a JSON document of the following form:


{
  "status": "UP"
}

The application.yml of the whole Sidecar application then looks roughly like this (the ports here match the ones used later in this article):

server:
  port: 5678
sidecar:
  port: 3000
  health-uri: http://localhost:3000/health

Service access

After building this application, you can call the /hosts/{serviceId} API to get the result of DiscoveryClient.getInstances(). Below is an example where /hosts/customers returns information about two instances on different hosts. If Sidecar runs on port 5678, the Node.js application can reach the API at http://localhost:5678/hosts/{serviceId}.


[

The Zuul proxy automatically adds a route for every service registered in Eureka under /<serviceId>, so the customers service can be reached via the /customers URI. Again assuming Sidecar listens on port 5678, our Node.js application can reach the customers service at http://localhost:5678/customers.


Config Server

If we run a Config Server and register it with Eureka, the Node.js application can reach it through the Zuul proxy. If the Config Server's serviceId is configserver and Sidecar listens on port 5678, then it can be reached at http://localhost:5678/configserver.

A Node.js application can also use Config Server features such as fetching configuration as a YAML document. For example, a request to http://sidecar.local.spring.io:5678/configserver/default-master.yml might return a YAML document like the following:

eureka:

The overall architecture of a Node.js application joining a Spring Cloud microservice cluster through Sidecar therefore looks roughly like this:

 

Demo

Let's suppose we have a very simple piece of data called User:

class User {

It looks very classic, ha!

Another data structure represents books, Book:

class Book {

The authorId in Book corresponds to a User's id. Now we need to develop REST services for these two data types.

First, User, which we build with Spring: in the controller's constructor we mock some fake users, then expose a very simple GET endpoint that looks a user up by id.

@GetMapping("/{id}")

After starting it, we test with curl:

curl localhost:8720/12

Next, we use Node.js to develop the Book endpoints.

Because the Node.js community is very active, there are plenty of REST frameworks to choose from: the mainstream ones are express, koa and hapi, plus very light, extensible ones like connect. Weighing community size against documentation richness, I chose express to develop this REST service that will join Spring Cloud.


const express = require('express')

We first use faker to mock 100 records, then write a simple GET route.

After startup, open http://localhost:3000/book/1 in a browser.


Now that we have both microservices, let's launch a Sidecar instance to connect the Node.js one to Spring Cloud.

@SpringBootApplication
@EnableSidecar
public class NodeSidecarApplication { // class name illustrative
    public static void main(String[] args) {
        SpringApplication.run(NodeSidecarApplication.class, args);
    }
}

Very simple. Note that before this you need a eureka-server; to test Sidecar's proxied access to Spring Cloud Config I also run a config-server. Readers familiar with Spring Cloud will know these already.

In Sidecar's configuration, bootstrap.yaml simply specifies the service port and the config-server address; the node-sidecar.yaml configuration is as follows:

eureka:

It specifies the address of the Node.js service that Sidecar fronts. hystrix.command.default.execution.timeout.enabled: false is there because Sidecar applies Hystrix's default one-second timeout, and given the speed of domestic access to GitHub, my test requests to config-server often timed out; so I disabled the timeout, though you could simply lengthen it instead.

When eureka-server, config-server, user-service, node-sidecar and node-book-service are all up, open the Eureka main page:

http://localhost:8700/


Our services are all in the UP state, so everything is normal. Next, look at the console of the Node.js application:


Traffic has come in, and the endpoint being hit is /health: clearly node-sidecar calling our Node application for its health checks.

Next comes the moment of truth. curl the Sidecar's port 8741:

curl localhost:8741/user-service/12

The result matches direct access to user-service, which shows that Sidecar's Zuul proxy can forward our requests to the user-service service.

With this proxy in place, we'd like the book service to expose author information as well:

const SIDECAR = {

We visit http://localhost:3000/book/2/author and can see the author information for the book with bookId 2. But one question remains: can we also reach the Node.js endpoints through the proxy, e.g. http://localhost:8741/node-sidecar/book/1, the way we proxied user-service, and how should user-service fetch data from the Node side? Recalling the theory section, we can call /hosts/<serviceId> to get information about each service. Trying http://localhost:8741/hosts/node-sidecar gives the following result:


The response includes the Node.js application's URI, so we could first query this Sidecar endpoint, take the real URI, and then call book-service's /books?uid=<uid> interface ourselves. Of course, Spring Cloud already has a tool that does exactly this for us: Feign. Create BookFeignClient.java:

@FeignClient(name = "node-sidecar")

A Feign client automatically resolves the service address from Eureka by serviceId; if the service has multiple instances, Ribbon does client-side load balancing, and RequestMapping-style annotations keep the client declaration consistent with the server-side controller. By defining this findByUid method we can easily call the /books?uid=<uid> endpoint defined in the Node.js app above, which also matches the Sidecar architecture we sketched earlier.

Now we define a new type Author in user-service, which extends User and adds a books field:

class Author extends User {

Add an endpoint that returns an Author:

@GetMapping("/author/{id}")

The logic is simple: fetch the user, fetch the books by uid through bookFeignClient, then build and return the Author.

Visiting http://localhost:8720/author/11 returns:


So far, with the help of Sidecar and the universal HTTP protocol, we have made Java and Node.js services call each other. For more of the same, such as fetching configuration from config-server or application information from Eureka, you can download the source of my experiment and explore.

I put the whole demo on my GitHub; you can clone it directly:

git clone https://github.com/marshalYuan/spring-cloud-example.git

The project layout is roughly:

  • eureka-server // the Eureka Server in the diagram above
  • config-server // the Config Server in the diagram above
  • config-repo // the searchPath (configuration repository) used by config-server
  • user-service // services developed in Java; both a service provider and a consumer
  • node-sidecar // a Sidecar instance responsible for connecting the Node app to Spring Cloud
  • book-service-by-node // REST services developed with express.js

You can start the five applications in this order:

eureka-server -> config-server -> user-service -> book-service-by-node -> node-sidecar

Since the demo is only meant for testing, it is not hardened against errors.

Closing thoughts

As the introduction said, thanks to the universal HTTP protocol and the rich Netflix suite, we can connect non-JVM languages such as Node.js, PHP and Python to Spring Cloud's very mature microservice framework and quickly build out a business system. You might ask: why not write everything in Java? Indeed, developing and maintaining a single language within a single system is much cheaper, but there are situations that make the Sidecar solution worth choosing.

For example, when the historical baggage is too heavy to move everything onto the Java platform and you don't want to rewrite the existing services, a unified protocol lets you integrate them at low cost, whether from Java or from other platforms.

There is also the idea of "harvesting the language dividend". Choosing a development language means choosing that language's tools and libraries. Python, for instance, is popular for data analysis, so that part of a microservice system could be written in Python; Node.js has an excellent asynchronous, event-driven model, so it could serve endpoints that handle large numbers of asynchronous requests; and so on. This is not meant to ignite a "best language" holy war: comparing languages' strengths and weaknesses without regard to use case and ecosystem is just trolling. As one example, I can't think of a language that expresses the code below as simply as Haskell does.

Pythagorean triples:

[(x, y, z) | x <- [1..100], y <- [x..100], z <- [y..100], x^2 + y^2 == z^2]

Besides, our topic here is Node.js, and everyone knows the best language is PHP anyway. (Runs away~)

    Android application performance optimization – startup acceleration

While studying Android performance optimization recently, I first tackled the jank that appeared when opening web pages inside the app by introducing a third-party WebView component. That introduced another problem: the component's initialization sat in Application, which noticeably lengthened the app's startup time. Today let's talk about startup acceleration from two angles: Application and Activity.

One. Accelerating Application

An app's startup time is the time from the user tapping the app icon to the first screen being shown. Shortening it, so the first screen appears quickly, greatly improves the user experience. There are two main Application-side optimizations: reduce the execution time of Application's onCreate method, and use a theme and Drawable to bring up the first screen faster.

1. Reduce the execution time of the onCreate method

A freshly generated Android Studio project starts very fast, but as the app grows more complex and integrates more third-party components, more and more initialization lands in onCreate. You then clearly notice startup jank: the white or black screen shown before the first interface grows longer, because onCreate takes too long to execute. One way to solve this is to move time-consuming initialization into an IntentService.

    The code of IntentService is as follows:

public static void start(Context context) {
    Intent intent = new Intent(context, DwdInitService.class);
    context.startService(intent);
}

@Override
protected void onHandleIntent(Intent intent) {
    if (intent != null) {
        // run the time-consuming initialization here, off the main thread
        // (the concrete actions are elided in the original)
    }
}

In Application, just start it:

DwdInitService.start(this);

I moved the X5WebView initialization there, and the improvement is quite obvious.

2. Optimize the presentation of the first screen

As mentioned above, a white or black screen always flashes when an app starts, which is particularly bad for the user experience. How do we eliminate it? With a custom theme and Drawable. Let's use a simple demo as the case; the effect is shown below:

     

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:layout_marginBottom="24dp"
        android:src="@drawable/ic_launcher" />

</FrameLayout>

The code is simple, but on every launch you will notice a white screen before the page shows. Now transform it as follows.

A. Define a loading.xml Drawable

Set the background and the logo image in it.

B. Define a theme in styles.xml and set windowBackground to loading.xml:

<style name="Theme.Default.NoActionBar" parent="@style/AppTheme">
    <item name="android:windowBackground">@drawable/loading</item>
</style>

C. Set the defined theme on LoadingActivity.

Done. Launch the app again: the white screen is gone, and the user experience is improved.

Two. Accelerating Activity

Once inside the app, the speed of jumps between pages is also an important part of the experience. For example, when opening an embedded web page, there can be a noticeable stutter between tapping the trigger button and the page appearing.

Optimizing an Activity again comes down to reducing the execution time of onCreate, which usually consists of two parts: setContentView() inflating the layout, and initializing and populating data.

The second part is the easier one: minimize time-consuming data reads and computation in onCreate, and use asynchronous callbacks to reduce occupation of the UI thread.

Now for setContentView. Every control in the layout must be initialized, laid out and drawn, and these are mostly time-consuming operations that slow down display. With no time-consuming data work in onCreate, profiling with TraceView shows setContentView() taking up almost 99% of the time from the start of onCreate() to the end of onResume().

To reduce the time spent in setContentView:

1. Reduce layout nesting

A. Use RelativeLayout

Reduce the use of LinearLayout, prefer RelativeLayout, and cut nesting levels. Nested LinearLayouts that use the layout_weight attribute are especially costly, because each child is measured twice; RelativeLayout is more tedious to write, but it reduces nesting and therefore drawing time.

    B. use

     

    Use

     

    After the merge tag is used, the layout level is reduced accordingly. C. controls its own properties by controlling its own properties, reducing nesting levels, such as common linear arrangement menu layout, as follows


<!-- some attributes were garbled in the original -->
<LinearLayout
    android:layout_width="match_parent"
    android:layout_height="62dip">

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content" />

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginLeft="15dp"
        android:textSize="18sp" />

</LinearLayout>

Using TextView's drawableRight attribute instead, the code becomes:

<!-- the drawableRight value and some attributes were garbled in the original -->
<TextView
    android:id="@+id/my_order"
    android:layout_width="match_parent"
    android:layout_height="62dip"
    android:gravity="center_vertical"
    android:paddingLeft="28dip" />

The amount of code and the nesting level both drop, and the effect is just as good.

2. Use ViewStub for lazy inflation

ViewStub is a lightweight, invisible view that lets you defer inflating part of a layout until it is needed. One common pattern is to set a flag and inflate the stub later, for example in onResume, only when the layout actually has to be shown.

Characteristics: (1) a ViewStub can be inflated only once, after which the ViewStub object is cleared; in other words, once the layout a ViewStub points to has been inflated, it can no longer be controlled through the ViewStub. (2) A ViewStub can only inflate a layout file, not a specific View, though a View can of course be wrapped in a layout file. Usage scenarios: (1) a (complex) layout that will not change after inflation for the rest of the run, short of a restart; (2) what you want to show or hide is a layout file rather than a single View. Case in point: after optimizing with ViewStub, inflation time dropped by between one half and two thirds.

Inflate the layout in the onCreate() method with:

ViewStub viewStub = (ViewStub) findViewById(R.id.viewstub_demo_image);
viewStub.inflate();

    Communication between Android components

First, let's sort out the ways different components communicate in Android.

(Tip: except for file storage and ContentProvider, the methods below generally refer to communication within the same process. Cross-process communication additionally needs Messenger or AIDL; those deserve a detailed introduction of their own, so they are not discussed here.)

Mode 1: pass values with Intent (Activity to Activity)

Sending example:

Intent intent = new Intent();
intent.putExtra("extra", "Activity1");
intent.setClass(Activity1.this, Activity2.class);
startActivity(intent);

Receiving example:

Intent intent = getIntent();
String data = intent.getStringExtra("extra");
TextView tv_data = (TextView) findViewById(R.id.tv_data);
tv_data.setText(data);

Mode 2: pass values with Binder (Activity to Service)

1. Define the Service

In the Service, define an inner class that extends Binder; through it, hand the Service object to the Activity that needs it, so the Activity can call the Service's public methods and fields, roughly:

public class MyService extends Service { // instantiate the Binder subclass you define

2. Bind the Activity to the Service

Obtain the MyService object through the IBinder's getService(), then call its public methods. The code looks like:

public class MyBindingActivity extends Activity {

Mode 3: pass values with a Broadcast

That is, use Broadcast's send and receive to communicate.

Sending example:

static final String ACTION_BROAD_TEST = "com.my.broad.test";
// send
Intent mIntent = new Intent(ACTION_BROAD_TEST);
sendBroadcast(mIntent);

Receiving example:

// dynamically register the receiver
public void registerMessageReceiver() {

Mode 4: use Application, SharedPreferences, file storage, a database, ContentProvider, and so on.

That is, store data in a longer-lived object such as Application for different Activities and other components to read and write. It is not entirely safe, because the Application process may well be reclaimed; SharedPreferences, file storage and databases all ultimately persist to files, so we won't discuss them further.
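The idea boils down to a long-lived shared holder that several components read and write. A minimal plain-Java sketch (the class and keys are illustrative, standing in for fields on the Android Application object; as noted, such state is lost if the process is reclaimed):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for data stashed on the Application object: one long-lived store
// that different components read and write during the process's lifetime.
public class SharedHolder {
    private static final Map<String, Object> store = new ConcurrentHashMap<>();

    public static void put(String key, Object value) {
        store.put(key, value);
    }

    @SuppressWarnings("unchecked")
    public static <T> T get(String key) {
        return (T) store.get(key); // caller must know the stored type
    }

    public static void main(String[] args) {
        SharedHolder.put("user", "Activity1 wrote this");
        String s = SharedHolder.get("user"); // another component reads it later
        System.out.println(s);
    }
}
```

Everything here lives only in memory, which is exactly why the text calls this approach unsafe compared with SharedPreferences or a database.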

Mode 5: use interfaces

That is, define an interface, have whoever cares about an event implement it, and have the event-triggering side register and unregister the interested listeners. This is the observer pattern, and its problem is obvious: it tends to couple components tightly, and the ever-growing number of interfaces becomes a chore; for space reasons we won't expand on it.
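A minimal sketch of this interface-based observer pattern in plain Java (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled listener interface: the event source notifies every registered
// observer. This is "mode 5" stripped to its core; names are illustrative.
public class ObserverSketch {

    interface OnDataChangedListener {
        void onDataChanged(String newValue);
    }

    static class DataSource {
        private final List<OnDataChangedListener> listeners = new ArrayList<>();

        void register(OnDataChangedListener l)   { listeners.add(l); }
        void unregister(OnDataChangedListener l) { listeners.remove(l); }

        // The event-triggering side notifies every registered observer.
        void setValue(String value) {
            for (OnDataChangedListener l : listeners) l.onDataChanged(value);
        }
    }

    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        DataSource source = new DataSource();
        OnDataChangedListener listener = received::add; // observer implements the interface
        source.register(listener);
        source.setValue("hello");
        source.unregister(listener);
        source.setValue("ignored");                      // no longer delivered
        System.out.println(received);                    // prints [hello]
    }
}
```

The coupling problem is visible even here: the source must know the listener interface, and every new kind of event means yet another interface and another register/unregister pair.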

To sum up, every communication method has its problems: mode 5 couples things rather badly, especially as interfaces multiply; broadcasts are a poor fit when, say, an Activity and a Fragment need to interact; and so on. So we need a simpler EventBus to get low-coupling communication between components.

Mode 6: EventBus

An introduction to the EventBus library

EventBus is a publish/subscribe event bus library optimized for Android.

     

It decouples event senders and receivers; it works well across Activities, Fragments and background threads; it avoids complex, error-prone dependency and lifecycle issues; it makes your code more concise; it is fast; the library is small (< 50 KB jar); it is already used by apps with 100,000,000+ installs; and it has advanced features such as delivery threads, subscriber priorities, and so on.

EventBus in three steps

Define events: public class MessageEvent { /* Additional fields if needed */ }

Prepare subscribers: eventBus.register(this);

public void onEvent(AnyEventType event) { /* Do something */ }

Post events:

eventBus.post(event);
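To see the mechanics behind these three steps, here is a toy publish/subscribe bus in plain Java. It is emphatically not greenrobot's EventBus (no annotations, threading modes or sticky events), just the registry-and-dispatch core that the three steps rely on:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy event bus: subscribers register interest in an event type, and post()
// delivers an event to every subscriber of that type.
public class TinyBus {
    private final Map<Class<?>, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    // Step 2: prepare subscribers for a given event type.
    public <T> void register(Class<T> eventType, Consumer<T> handler) {
        subscribers.computeIfAbsent(eventType, k -> new CopyOnWriteArrayList<>())
                   .add(e -> handler.accept(eventType.cast(e)));
    }

    // Step 3: post an event; it reaches all subscribers of its class.
    public void post(Object event) {
        List<Consumer<Object>> list = subscribers.get(event.getClass());
        if (list != null) for (Consumer<Object> c : list) c.accept(event);
    }

    // Step 1: define an event.
    static class MessageEvent {
        final String text;
        MessageEvent(String text) { this.text = text; }
    }

    public static void main(String[] args) {
        TinyBus bus = new TinyBus();
        StringBuilder log = new StringBuilder();
        bus.register(MessageEvent.class, e -> log.append("got: ").append(e.text));
        bus.post(new MessageEvent("hello"));
        System.out.println(log); // prints got: hello
    }
}
```

The decoupling is plain: the sender only knows the event class, never who receives it, which is exactly what makes Activity-Fragment interaction painless compared with the interface approach of mode 5.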

    The following is an example of EventBus use on the Internet: http://blog.csdn.net/jdsjlzx/article/details/40856535


    Problems with EventBus

    Of course, EventBus is not a panacea, and some problems arise in practice. Precisely because it is so convenient, it is easily misused and can end up making the code logic more chaotic; for example, some places may end up sending messages in a loop. In a later chapter we will look carefully at whether there is a better alternative to EventBus, such as RxJava.

    Android Service keep-alive: attack and defense

    In June 2015 the company launched a project, similar to a travel app, that needed to upload latitude and longitude in real time, which involved the problem of keeping a background Service alive. Because of business scenarios like this, the keep-alive problem is actually encountered by many developers, and many people ask about it online, for example: how can an Android program keep running in the background, like QQ or WeChat, without being killed? There are many answers, but only a few approaches achieve the desired effect. Next, I will discuss the various schemes in combination with my own development experience.

    One. Why keep alive?

    The keep-alive problem arises because we hope our service or process can keep running in the background, but various things can dash that hope. The main causes are: 1. recovery by the Android system; 2. the phone manufacturer's customized management systems, such as power management and memory management; 3. third-party software; 4. the user manually ending the process.

    Two. Keep-alive techniques

    1. Modify the return value of the onStartCommand method of Service

    Can the service restart itself after it is killed? A common practice is to modify the return value of onStartCommand() and return START_STICKY. onStartCommand() returns an integer describing whether the system should restart the service after killing it. There are three return values:

    START_STICKY: if the service process is killed, the service is kept in the started state but the delivered Intent objects are not retained, and the system will later try to recreate the service.

    START_NOT_STICKY: with this return value, if the service is killed abnormally after onStartCommand has executed, the system will not keep it in the started state and will not automatically restart the service.

    START_REDELIVER_INTENT: with this return value, if the service is killed abnormally after onStartCommand has executed, the system will automatically restart the service after a period of time and redeliver the last Intent to it.

    [Feasibility] Looking at these three return values, setting the return value to START_STICKY or START_REDELIVER_INTENT looks like the hope for keep-alive. In actual testing, however, the service is only restarted in a few situations and on a few models, apart from kills caused by low memory.
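    As a hedged sketch of this approach (LocationService is an invented name; this assumes the standard android.app.Service API and will only compile against the Android SDK):

```java
// Sketch only: ask the system to recreate the service after it is killed.
public class LocationService extends android.app.Service {

    @Override
    public int onStartCommand(android.content.Intent intent, int flags, int startId) {
        // ... start the real work here (e.g. requesting location updates) ...
        // START_STICKY: keep the service in the started state after a kill,
        // but do not redeliver the original Intent on recreation.
        return START_STICKY;
    }

    @Override
    public android.os.IBinder onBind(android.content.Intent intent) {
        return null; // a started (non-bound) service
    }
}
```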

    2. Restarting from the Service's onDestroy method

    In onDestroy, send a broadcast; when the broadcast is received, restart the Service.
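    A hedged sketch of this idea (the action string, RestartReceiver, and LocationService are invented names; assumes the standard Android broadcast APIs, with the receiver declared in the manifest):

```java
// Sketch only: restart-on-destroy via a broadcast.
public class LocationService extends android.app.Service {

    @Override
    public void onDestroy() {
        super.onDestroy();
        // Notify a receiver that we are being destroyed.
        // "com.example.RESTART_SERVICE" is an illustrative action name.
        sendBroadcast(new android.content.Intent("com.example.RESTART_SERVICE"));
    }

    @Override
    public android.os.IBinder onBind(android.content.Intent intent) {
        return null;
    }
}

class RestartReceiver extends android.content.BroadcastReceiver {
    @Override
    public void onReceive(android.content.Context context, android.content.Intent intent) {
        // Restart the service when the broadcast arrives.
        context.startService(new android.content.Intent(context, LocationService.class));
    }
}
```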


    [Feasibility] Under the four causes listed above, when the Service is killed the app process is usually killed with it, so the onDestroy method never even executes, and there is no way to restart the service.

    3. Improve the Service priority

    Raise the priority when registering the Service:

    <service android:name="com.dwd.service.LocationService" android:exported="false" android:priority="1000" />

    [Feasibility] This method does not work for a Service: the service element has no such priority attribute.

    4. Foreground service

    A foreground service is considered a running service that the user is aware of; when the system needs to reclaim memory, it will not kill the process. A foreground service must show a notification in the status bar.

    NotificationCompat.Builder builder = new NotificationCompat.Builder(this);
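    A fuller hedged sketch of promoting a service to the foreground (LocationService, the notification id 1, and the notification text are invented values; assumes android.app.Service and the support-library NotificationCompat of the article's era):

```java
// Sketch only: a service promoting itself to the foreground.
public class LocationService extends android.app.Service {

    @Override
    public int onStartCommand(android.content.Intent intent, int flags, int startId) {
        android.support.v4.app.NotificationCompat.Builder builder =
                new android.support.v4.app.NotificationCompat.Builder(this);
        builder.setContentTitle("Uploading location")  // illustrative text
               .setSmallIcon(android.R.drawable.ic_menu_mylocation);
        // startForeground keeps the process at foreground priority
        // and shows the required status-bar notification (id 1 is arbitrary).
        startForeground(1, builder.build());
        return START_STICKY;
    }

    @Override
    public android.os.IBinder onBind(android.content.Intent intent) {
        return null;
    }
}
```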

    [Feasibility] This method has some effect against system recovery and reduces the probability of being reclaimed, but under very low memory the process will still be killed and will not be restarted. Likewise, if a cleaning tool or the user forcibly ends the app, the process dies and is not restarted.

    5. Guardian processes

    There are two ways to implement this scheme. One is dual services or dual processes: start two services (or processes) that watch each other, so that when one is killed, the other restarts it. The other is to fork a child process from the native layer to guard the main process.

    [Feasibility] In the first way, both processes or services hang up together with the application process, so neither can restart the other. The second way can indeed revive the process when it is killed, but Android 5.0 and above puts forked child processes into the same process group as the parent; when the main process dies, the whole process group is killed, so the fork approach cannot revive the app on Android 5.0 and above.

    6. Monitoring system broadcasts

    Listen to certain system broadcasts, such as phone boot, screen unlock, network connectivity changes, and application state changes; then check whether the Service is alive, and start it if it is not.
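    A hedged sketch of such a receiver (KeepAliveReceiver and LocationService are invented names; the receiver must also be declared in AndroidManifest.xml with matching intent-filters, and on newer Android versions several of these broadcasts can no longer be received this way):

```java
// Sketch only: restart the service when common system broadcasts arrive.
public class KeepAliveReceiver extends android.content.BroadcastReceiver {
    @Override
    public void onReceive(android.content.Context context, android.content.Intent intent) {
        String action = intent.getAction();
        if (android.content.Intent.ACTION_BOOT_COMPLETED.equals(action)
                || android.content.Intent.ACTION_USER_PRESENT.equals(action)
                || android.net.ConnectivityManager.CONNECTIVITY_ACTION.equals(action)) {
            // startService is harmless if the service is already running:
            // it just delivers another onStartCommand call.
            context.startService(new android.content.Intent(context, LocationService.class));
        }
    }
}
```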

    [Feasibility] Since Android 3.1, to strengthen security and optimize performance, the system restricts broadcasts: an application that has never been launched after installation, or that the user has force-stopped, cannot receive common system broadcasts such as phone boot, screen unlock, or network connectivity changes. In addition, the latest Android N removed the network-change broadcast, which is really sad; this approach is becoming unusable.

    7. Interoperability between applications

    Different apps can use broadcasts to wake each other up. For example, among Ali apps such as Alipay, Taobao, and Tmall, opening any one of them wakes up the other Ali apps; in fact almost all BAT apps do this. In addition, many push SDKs will also wake up the app.

    [Feasibility] Waking each other up requires the apps to be related to each other, as with apps sharing the same SDK; and once the user force-stops the app, it cannot be woken up.

    8. A one-pixel Activity

    After the application goes to the background, leave a page of only 1 pixel on the screen to keep the app in the foreground and protect it from being killed by background cleaning tools. This scheme is the practice that Xiaomi exposed Tencent QQ for using.

    [Feasibility] The app will still be killed.

    9. Install the APK to /system/app, turning it into a system-level application

    [Feasibility] This method is only suitable for pre-installed applications; ordinary applications cannot be turned into system-level applications.

    10. Using the account and sync mechanism provided by the Android system

    Create an account in the application, then enable automatic synchronization and set the sync interval, using the sync to wake up the app. After the account is created, it can be seen under Settings – Accounts on the phone. The user may delete the account or turn off synchronization, so it is necessary to periodically check whether the account still exists and can sync.
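    A hedged sketch of the account-creation side (the account type, authority, and 900-second interval are invented illustrative values; a matching authenticator and sync adapter must also be declared in the manifest, which is omitted here):

```java
// Sketch only: create a sync account and enable periodic sync.
// Real apps need an AbstractAccountAuthenticator and an
// AbstractThreadedSyncAdapter declared in AndroidManifest.xml.
public final class SyncAccountHelper {
    private SyncAccountHelper() {}

    public static void setUpAccount(android.content.Context context) {
        // "com.example.account" is an illustrative account type.
        android.accounts.Account account =
                new android.accounts.Account("keep-alive", "com.example.account");
        android.accounts.AccountManager am =
                android.accounts.AccountManager.get(context);
        // Create the account (no password, no extra user data).
        am.addAccountExplicitly(account, null, null);

        String authority = "com.example.provider"; // illustrative authority
        android.content.ContentResolver.setIsSyncable(account, authority, 1);
        android.content.ContentResolver.setSyncAutomatically(account, authority, true);
        // Ask the system to trigger a sync roughly every 900 seconds;
        // the sync adapter's onPerformSync then wakes the app.
        android.content.ContentResolver.addPeriodicSync(
                account, authority, new android.os.Bundle(), 900L);
    }
}
```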


    [Feasibility] Except on Meizu phones, this scheme can successfully wake up the app no matter how it is killed. On Xiaomi phones, however, the MIUI "Shenyin" mode needs to be turned off. This scheme has been circulating for nearly a year, and many developers now use it.

    11. Whitelist

    Get the application into the whitelist of the phone manufacturer or security software to ensure the process is not reclaimed by the system. For example, WeChat and QQ are in Xiaomi's whitelist, so WeChat will not be killed by the system, though the user can still stop it manually.

    [Feasibility] The success rate of this scheme is good, but users can still kill the application manually. Besides, if your user base is not large enough, it is hard for an application developer to negotiate with the big manufacturers; domestic Android phone manufacturers are too numerous, and the cost is too high. But when your install base and active users reach WeChat's level, perhaps the manufacturers will take the initiative to add your application to their whitelists.

    I applied schemes 4, 6, 7, and 10 together and kept the service alive successfully on about 90% of phones. Service keep-alive is really a war of attack and defense: applications want to run in the background to meet their requirements, while the system kills background services out of performance and security considerations. Moreover, keep-alive is a protracted war; today's feasible scheme may be blocked by Google one day. There is no end to learning, and exploration never stops.