Android custom controls: a color-gradient circular progress bar

At first glance, the effect of the custom control is shown below.

This effect is fairly common, but the Android system does not provide such a control out of the box, so we have to build it ourselves.

In Android projects, the stock system controls sometimes cannot satisfy the product requirements or match the UI design, so custom controls are needed to achieve the desired effect. Writing custom controls is therefore fundamental work in Android development. This article walks through the process of building a custom control by inheriting from View and drawing with Canvas.

1. Define GradientProgressBar, inherit from View, and implement the required constructors.

The code is as follows:

/**
 * Created by WangChunLei
 */
public class GradientProgressBar extends View {

    public GradientProgressBar(Context context) {
        super(context);
        init();
    }

    public GradientProgressBar(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }

    public GradientProgressBar(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        init();
    }

The init method initializes the related Paint brushes. Its code is as follows:

private void init() {
    backCirclePaint = new Paint();
    backCirclePaint.setStyle(Paint.Style.STROKE);
    backCirclePaint.setAntiAlias(true);
    backCirclePaint.setColor(Color.LTGRAY);
    backCirclePaint.setStrokeWidth(circleBorderWidth);
    // backCirclePaint.setMaskFilter(new BlurMaskFilter(20, BlurMaskFilter.Blur.OUTER));

    gradientCirclePaint = new Paint();
    gradientCirclePaint.setStyle(Paint.Style.STROKE);
    gradientCirclePaint.setAntiAlias(true);
    gradientCirclePaint.setColor(Color.LTGRAY);
    gradientCirclePaint.setStrokeWidth(circleBorderWidth);

    linePaint = new Paint();
    linePaint.setColor(Color.WHITE);
    linePaint.setStrokeWidth(5);

    textPaint = new Paint();
    textPaint.setAntiAlias(true);
    textPaint.setTextSize(textSize);
    textPaint.setColor(Color.BLACK);
}

2. Control the control's width and height: onMeasure

onMeasure is the first step of a custom control; its purpose is to measure the control's width and height. The code of the onMeasure method is as follows:


@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    int measureWidth = MeasureSpec.getSize(widthMeasureSpec);
    int measureHeight = MeasureSpec.getSize(heightMeasureSpec);
    setMeasuredDimension(Math.min(measureWidth, measureHeight), Math.min(measureWidth, measureHeight));
}


After seeing the onMeasure code, you may be surprised that a measurement process can be this simple. Don't mind that: interested readers can refine it by handling the different measure modes separately, making the control more robust and complete. In onMeasure we obtain the desired width and height and take the smaller of the two as the control's size.

3. Draw the different parts of the control in turn.

Because the control inherits directly from View, it does not need to override onLayout. This is one of the differences between a custom View and a custom ViewGroup (conversely, a class extending ViewGroup does not necessarily have to override onMeasure).

To achieve the effect as shown, we need to implement the following steps in turn.

(1) draw a gray hollow ring

(2) draw a ring of color gradient

(3) draw the white lines divided on the ring

(4) draw the percentage text.

If content drawn later intersects content drawn earlier, the later drawing covers the earlier one. Following the steps above, the member variables below are used during drawing.

/* arc line width */
private float circleBorderWidth = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, getResources().getDisplayMetrics());
/* internal padding */
private float circlePadding = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, getResources().getDisplayMetrics());
/* font size */
private float textSize = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_SP, 50, getResources().getDisplayMetrics());
/* brush for the background circle */
private Paint backCirclePaint;
/* brush for the white lines on the circle */
private Paint linePaint;
/* brush for the percentage text */
private Paint textPaint;
/* percentage */
private int percent = 0;
/* color array for the gradient circle */
private int[] gradientColorArray = new int[]{Color.GREEN, Color.parseColor("#fe751a"), Color.parseColor("#13be23")};
/* brush for the gradient circle */
private Paint gradientCirclePaint;

3.1 draw a gray hollow ring

The code is as follows:

// 1. draw the gray background circle
canvas.drawArc(
        new RectF(circlePadding * 2, circlePadding * 2,
                getMeasuredWidth() - circlePadding * 2, getMeasuredHeight() - circlePadding * 2),
        -90, 360, false, backCirclePaint);

Here, -90 is the start angle of the arc, and 360 is the sweep angle (sweepAngle) of the full circle.

3.2 draw a ring of color gradient

// 2. draw the color-gradient circle
LinearGradient linearGradient = new LinearGradient(circlePadding, circlePadding,
        getMeasuredWidth() - circlePadding,
        getMeasuredHeight() - circlePadding,
        gradientColorArray, null, Shader.TileMode.MIRROR);
gradientCirclePaint.setShader(linearGradient);
gradientCirclePaint.setShadowLayer(10, 10, 10, Color.RED);
canvas.drawArc(
        new RectF(circlePadding * 2, circlePadding * 2,
                getMeasuredWidth() - circlePadding * 2, getMeasuredHeight() - circlePadding * 2),
        -90, percent / 100f * 360, false, gradientCirclePaint);

Here, linearGradient is the Shader of the Paint, which must be set to obtain the color-gradient effect on the arc. Shaders are not used often in daily development, but they do achieve a very satisfying gradient effect.

3.3 draw the white lines divided on the ring

Drawing the white lines on the arc requires a little math: the start coordinates (startX, startY) and end coordinates (stopX, stopY) of each line are easily computed with simple trigonometry. In this effect, the arc is divided into 100 segments by white lines, so each segment represents 1%, matching both the int percentage and the proportions of the design.

// radius
float radius = (getMeasuredWidth() - circlePadding * 3) / 2;
// x coordinate of the center point
int centerX = getMeasuredWidth() / 2;
// 3. draw 100 line segments to cut the hollow arc
for (float i = 0; i < 360; i += 3.6) {
    double rad = i * Math.PI / 180;
    float startX = (float) (centerX + (radius - circleBorderWidth) * Math.sin(rad));
    float startY = (float) (centerX + (radius - circleBorderWidth) * Math.cos(rad));
    float stopX = (float) (centerX + radius * Math.sin(rad) + 1);
    float stopY = (float) (centerX + radius * Math.cos(rad) + 1);
    canvas.drawLine(startX, startY, stopX, stopY, linePaint);
}


3.4 draw percentage text

Finally, draw the percentage text. To keep the center of the text aligned with the center of the arc, we first measure the width and height of the text and then do some simple calculations; the principle needs no further explanation.

// 4. draw the text
float textWidth = textPaint.measureText(percent + "%");
int textHeight = (int) (Math.ceil(textPaint.getFontMetrics().descent - textPaint.getFontMetrics().ascent) + 2);
canvas.drawText(percent + "%", centerX - textWidth / 2, centerX + textHeight / 4, textPaint);

Finally, a public method is exposed to change the displayed percentage. The code is as follows:

/**
 * Set the display percentage
 *
 * @param percent
 */
public void setPercent(int percent) {
    if (percent < 0) {
        percent = 0;
    } else if (percent > 100) {
        percent = 100;
    }
    this.percent = percent;
    invalidate();
}


At this point the whole drawing process has been covered; roughly 130 lines of code achieve a very cool effect. Finally, here is the complete code of the class for readers who would rather not piece the steps together.

package com.example.myview;

import android.content.Context;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.util.TypedValue;
import android.view.View;

/**
 * Created by WangChunLei
 */
public class GradientProgressBar extends View {

    /* arc line width */
    private float circleBorderWidth = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, getResources().getDisplayMetrics());
    /* internal padding */
    private float circlePadding = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 20, getResources().getDisplayMetrics());
    /* font size */
    private float textSize = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_SP, 50, getResources().getDisplayMetrics());
    /* brush for the background circle */
    private Paint backCirclePaint;
    /* brush for the white lines on the circle */
    private Paint linePaint;
    /* brush for the percentage text */
    private Paint textPaint;
    /* percentage */
    private int percent = 0;
    /* color array for the gradient circle */
    private int[] gradientColorArray = new int[]{Color.GREEN, Color.parseColor("#fe751a"), Color.parseColor("#13be23")};
    /* brush for the gradient circle */
    private Paint gradientCirclePaint;

    public GradientProgressBar(Context context) {
        super(context);
        init();
    }

    public GradientProgressBar(Context context, AttributeSet attrs) {
        super(context, attrs);
        init();
    }

    public GradientProgressBar(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        init();
    }

    private void init() {
        backCirclePaint = new Paint();
        backCirclePaint.setStyle(Paint.Style.STROKE);
        backCirclePaint.setAntiAlias(true);
        backCirclePaint.setColor(Color.LTGRAY);
        backCirclePaint.setStrokeWidth(circleBorderWidth);
        // backCirclePaint.setMaskFilter(new BlurMaskFilter(20, BlurMaskFilter.Blur.OUTER));

        gradientCirclePaint = new Paint();
        // ... the listing is cut off here in the source; the rest of init(),
        // onMeasure(), onDraw(), and setPercent() are exactly as shown in the
        // sections above.
    }
}

Web optimization training camp: speed up your web page 50 times


We will use a complete example to optimize loading, rendering and other experiences step by step.


First, let’s look at the file composition of the project

This includes basic web page elements: JS (a React app), CSS, and images.

Related resources see

Let's first look at the server that serves the whole page.


'use strict';

const fs = require('fs');
const path = require('path');
const koa = require('koa');
const app = koa();

app.use(function* (next) {
    const file = this.path.slice(1) || 'index.html';
    try {
        const content = yield cb => fs.readFile(path.resolve('./dist', file), cb);
        this.body = content;
        this.type = path.extname(file).slice(1);
        this.status = 200;
    } catch (e) {
        this.status = 404;
    }
    yield next;
});

app.listen(process.env.PORT || 3000);

This code simply forwards the files in the dist directory.

When you open the web page, you can see the resources being loaded.

As we can see, the whole app.js is 277KB; on a simulated 3G network (blue frame) each load takes 999ms, of which the download costs 911ms (red frame).

Next we will gradually optimize, and then compare the results every time.

Optimization (1): 304

The most common technique in web loading optimization is 304 Not Modified. The mechanism: the browser sends a request whose headers include If-Modified-Since (absent if there is no cache yet); the server compares it with the file's last-modified time on disk (or in memory). If the file has not been modified since that time, the server returns 304; otherwise it returns 200 and adds the Last-Modified header, telling the client that on the next request it can ask whether the cached copy is still valid.

The specific code is as follows:

app.use(function* () {
    const file = path.resolve(__dirname, path.resolve('dist', this.path.slice(1) || 'index.html'));
    let ifLastModified = this.headers['if-modified-since'];
    if (ifLastModified) {
        ifLastModified = new Date(ifLastModified);
    }
    try {
        const stat = yield cb => fs.stat(file, cb);
        if (ifLastModified &&
            file !== path.resolve(__dirname, path.resolve('dist/index.html'))) {
            if (ifLastModified >= stat.mtime) {
                this.status = 304;
                return;
            }
        }
        const content = yield cb => fs.readFile(file, cb);
        this.body = content;
        this.type = path.extname(file).slice(1);
        this.status = 200;
        this.set('Last-Modified', stat.mtime);
    } catch (e) {
        this.status = 404;
    }
});



(To simulate the real situation, where the home page is generated dynamically with ads, tracking, or personalized data added, index.html is not cached.)

Final effect:

We can see that the download time is 2ms, which is almost negligible (HTTP headers only), and the total load time is only 120ms, 869ms less than before.

But, are we satisfied?

Optimization (2): split the bundle

Notice that we currently bundle everything into a single JS file. The dependencies rarely change (in this example, only react and react-dom), yet every modification causes the entire JS file to be re-requested. So we want to extract the libraries (and even the common code modules within the project) into separate bundles.

We first need to create a webpack.vendors.config.js to build these libraries, or vendors.

const path = require('path');
const WebpackCleanupPlugin = require('webpack-cleanup-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const webpack = require('webpack');
const ExtractTextPlugin = require('extract-text-webpack-plugin');

module.exports = {
    plugins: [
        new webpack.DefinePlugin({
            'process.env': {
                NODE_ENV: '"production"',
            },
        }),
        new webpack.optimize.OccurenceOrderPlugin(),
        new webpack.optimize.UglifyJsPlugin({
            compress: {
                warnings: false,
                screw_ie8: true,
                drop_console: true,
                drop_debugger: true,
            },
        }),
        new webpack.DllPlugin({
            path: path.resolve(__dirname, 'dist/vendor/[name]-manifest.json'),
            name: '[name]',
            context: '.',
        }),
    ],
    entry: {
        'react': ['react', 'react-dom'],
    },
    output: {
        path: path.resolve(__dirname, 'dist/vendor'),
        filename: '[name].js',
        library: '[name]',
    },
};

Note that the entry

entry: {
    'react': ['react', 'react-dom'],
},

means that packages of the same kind can be bundled together into a single JS file.

Of course, we also need to make some modifications to webpack.production.js.

const dlls = fs.readdirSync(path.resolve(__dirname, 'dist/vendor/'))
    .filter(file => path.extname(file) === '.js')
    .map(file => path.basename(file, '.js'));

const dllReferencePlugins = dlls
    .map(dll =>
        new webpack.DllReferencePlugin({
            context: '.',
            manifest: require(`./dist/vendor/${dll}-manifest.json`),
        }));

module.exports = {
    plugins: dllReferencePlugins.concat([
        // ... the other production plugins
    ]),
};

Here we automatically scan the files under the vendor directory and load all the vendors.

In this way we implement split-bundle loading (some details were also revised, including index.html; see the step-2 branch on GitHub).

The result is pretty good: app.js on its own now needs only 400-odd ms to load, at least twice as fast as before.

For general types of websites, the optimization has achieved very good results, but for large websites, we can still do a lot.

Optimization (3): forced caching

Notice that even with optimization (1), a 304 request still takes more than 100 milliseconds; for a large site with many resources, that is by no means a small expense. Can we save it as well? The answer is yes.

In browser caching there is a special header, Expires, which specifies an expiration time for the file: until that moment, the browser will not re-issue the request but reads directly from the local cache.

However, the file must still be re-requested once it expires. What should we do? The answer: set an extra-long cache time, for example 10 years. But then we could never update anything. How can we use this feature and still update easily?

We can add a hash of the content to the file name, so a file is re-downloaded only when its content actually changes. This also suits distributed CDNs and non-overwriting releases: while a release is in progress, pages that have already been updated reference the new resources, while pages still served by old machines keep referencing the old resources, so releases no longer require staying up all night.
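To make the idea concrete, here is a minimal sketch (not the project's actual code; the helper name `cachePolicyFor` and the 8-hex-digit hash convention are assumptions for illustration) of choosing cache headers by file name:

```javascript
// Sketch: hashed asset names are content-addressed, so they can be cached
// "forever"; the entry page must be revalidated on every request.
const HASHED = /\.[0-9a-f]{8,}\.(js|css|png|jpg)$/;

function cachePolicyFor(filename) {
  if (HASHED.test(filename)) {
    // e.g. app.3f9d2c1a.js: cache for 10 years
    return { 'Cache-Control': 'public, max-age=315360000' };
  }
  // index.html and friends: always ask the server again
  return { 'Cache-Control': 'no-cache' };
}
```

A new build emits a new name such as app.3f9d2c1a.js, index.html references it, and the old copies simply age out of caches.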

The details of the changes can be seen in Git branch step-3.

The result: the blue box shows that the cache has taken effect, and the overall read time is only 20 milliseconds.

From the original 1000 milliseconds to the current 20 milliseconds: three simple steps can make your web page load 50 times faster.

Extended reading

1. In real production, we usually see resources loaded from a dedicated CDN domain name. Why?

This is because on a large website every request carries many cookies, some close to 1KB; with 100 images loading, that is a full 100KB. By using a third-party domain (different from the current one), we avoid sending a lot of unnecessary request headers and cookie headers, which also serves the goal of speed.

2. Another situation: resources are distributed across different servers.

This is because browsers limit the number of concurrent downloads of resources under the same domain name.

Using different resource servers avoids this restriction and increases the number of parallel downloads. The trade-off is cache hit rate: the same resource should consistently be served from the same server, so data related to the user's cache must be stored accordingly.
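A minimal sketch of this idea, with hypothetical static host names; picking the shard deterministically from the file name keeps the cache hit rate stable:

```javascript
// Deterministic shard choice: the same file always maps to the same host,
// so the browser cache is not defeated by random host assignment.
// The host names are assumptions for illustration.
var HOSTS = ['static0.example.com', 'static1.example.com'];

function shardFor(filename) {
  var sum = 0;
  for (var i = 0; i < filename.length; i++) {
    sum = (sum + filename.charCodeAt(i)) % HOSTS.length;
  }
  return HOSTS[sum];
}
```

Every page that references a given file computes the same host, so a resource cached on one page is a cache hit on the next.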

3. other methods

With the rapid development of technology, there are still many technologies that can enhance the experience of end-users.

BigPipe + server-side rendering to speed up home page loading.

Google AMP


The past and present of JavaScript's prototype and prototype chain (2)

3.1. Prototype objects

In the previous article we talked about the prototype attribute, which points to an object that contains the properties and methods shared by all instances of a particular type. In the standard, this object is called the prototype object. A prototype object is generated by a set of specific rules whenever a new function is created.

Since there is a prototype attribute, why did the browser in the previous article show __proto__? According to ECMA-262 5th edition, the internal pointer of an instance is called [[Prototype]]. There is no standard way to access it, but Firefox, Safari, and Chrome support a __proto__ property on every object, so the __proto__ you see is an access interface implemented by the browser itself, not something defined by the standard. In practice, though, it is fine to follow the browsers' design.

Having said all that, a question: between which two things does __proto__ form the link? Think it over.

Although [[Prototype]] cannot be accessed directly, the isPrototypeOf() method can be used to determine whether this relationship exists between two objects, and Object.getPrototypeOf() can be used to obtain the value of [[Prototype]].

Since the properties of the prototype object can also be accessed from an instantiated object, how do we determine whether an accessed property lives on the instance or on the prototype? The answer is hasOwnProperty().

The for-in loop traverses all enumerable properties the object can access, whether on the instance or on the prototype. In section 6.2 of JavaScript advanced programming (Third Edition) there is this statement:

Instance properties that shadow a non-enumerable prototype property (one whose [[Enumerable]] is false) will still be returned in a for-in loop. In my understanding: if you define an instance property with the same name as a non-enumerable prototype property, for-in will still return that property. This is actually fairly obvious, because for-in finds the property on the instance, and instance properties are enumerable by default (unless you manually make them non-enumerable).

Enumerating all enumerable properties one by one is troublesome. Fortunately, ES5 provides the Object.keys() method to get all enumerable own properties. If you want all own properties, enumerable or not, use Object.getOwnPropertyNames().
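A short sketch (the Animal example is illustrative) showing these inspection methods side by side:

```javascript
function Animal() {}
Animal.prototype.type = 'dog';   // lives on the prototype

var dog = new Animal();
dog.name = 'WangWang';           // lives on the instance

// Own vs. inherited properties
var ownsName = dog.hasOwnProperty('name');   // true: defined on the instance
var ownsType = dog.hasOwnProperty('type');   // false: found on the prototype

// [[Prototype]] checks
var isProto = Animal.prototype.isPrototypeOf(dog);  // true
var proto = Object.getPrototypeOf(dog);             // Animal.prototype

// Enumerable own properties and all own properties
var keys = Object.keys(dog);                        // ['name']
var ownNames = Object.getOwnPropertyNames(dog);     // ['name']
```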

As mentioned, the prototype pattern has shortcomings. Its biggest flaw is its shared nature: changing any reference-type property on the prototype object affects every object instantiated from it, which defeats the goal of sharing what is common while keeping what is individual. So the following pattern is used more often.

3.2. Using the constructor pattern and the prototype pattern together

The combination is straightforward: put all shared properties on the prototype object and all instance-specific properties in the constructor, truly sharing the common while keeping the individual. For example:

function Animal(name, type) {
    this.name = name;
    this.type = type;
    this.say = function () {
        console.log('I am a ' + this.type);
    };
}

Animal.prototype = {
    constructor: Animal,
    feetCount: 0,
    run: function () {
        console.log('I can run');
    }
};

var dog = new Animal('WangWang', 'dog');

Note: why does Animal.prototype reassign constructor here? Readers who followed the previous article can think it through!

Can we optimize the code above to reduce its amount? The dynamic prototype pattern can be used here:

function Animal(name, type) {
    this.name = name;
    this.type = type;
    this.say = function () {
        console.log('I am a ' + this.type);
    };
    if (typeof this.run !== 'function') {
        Animal.prototype.feetCount = 0;
        Animal.prototype.run = function () {
            console.log('I can run');
        };
    }
}

var dog = new Animal('WangWang', 'dog');

Note: why can't we initialize the prototype with an object literal here, as in the previous example? Again, think it through for yourselves!

JavaScript advanced programming (Third Edition) also introduces two more patterns for creating objects: the parasitic constructor pattern and the durable constructor pattern; see the book for details.

4. The prototype chain

By now you can probably guess how the prototype chain is implemented. ES5 uses the prototype chain as the primary way to implement inheritance (because functions have no signatures, interface inheritance cannot be implemented in ES5). The basic idea is to use the prototype to let one reference type inherit the properties and methods of another. Let's improve figure 3 from the previous section:

As can be seen, the [[Prototype]] property links the instance, its prototype object, and that prototype object's own prototype into a chain. This chain is the prototype chain.

The prototype chain likewise has two problems: (1) prototype properties holding reference-type values are shared by all instances (the problem just discussed in section 3.1); (2) you cannot pass arguments to the supertype constructor when creating an instance of the subtype.
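Problem (1) can be sketched like this (illustrative names):

```javascript
function Species() {
  this.colors = ['red', 'green'];  // a reference-type value
}
function Animal() {}
Animal.prototype = new Species();  // colors now lives on the shared prototype

var dog = new Animal();
var cat = new Animal();
dog.colors.push('black');          // mutates the shared array

// cat.colors is now ['red', 'green', 'black'] too:
// both instances resolve colors to the same array on the prototype.
```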

Therefore, the commonly used solutions include the following:

4.2. Combination inheritance

Combination inheritance is sometimes called pseudo-classical inheritance. It combines borrowing constructors (see section 4.2.1) with the prototype chain. Look at the following example:

function Species(name, type) {
    this.name = name;
    this.type = type;
}

Species.prototype.run = function () {
    console.log('I can run!');
};

function Animal(name, type, age) {
    Species.call(this, name, type);
    this.age = age;
}

Animal.prototype = new Species();
Animal.prototype.constructor = Animal;
Animal.prototype.reportAge = function () {
    console.log('My age is ' + this.age);
};

var dog = new Animal('WangWang', 'dog', 11);
dog.run();
dog.reportAge();

The prototype chain diagram of the code is as follows:

4.2.1. Borrowing constructors

Borrowing constructors (constructor stealing, sometimes called object masquerading or classical inheritance) is simple in principle: call the supertype constructor inside the subtype constructor. For example:

function Species() {
    this.colors = ['red', 'green'];
}

function Animal(type, name) {
    Species.call(this);
    this.type = type;
    this.name = name;
}

var dog = new Animal('dog', 'WangWang');
dog.colors.push('black');

var cat = new Animal('cat', 'MiMi');
cat.colors.push('yellow');

Because call is used, when the Species supertype executes, this points to the subclass instance being constructed; colors effectively becomes a property of the Animal subclass, so each instance's colors is its own private property. As follows:
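Continuing the example above, a self-contained sketch shows that the two instances' arrays stay independent:

```javascript
function Species() {
  this.colors = ['red', 'green'];
}
function Animal(type, name) {
  Species.call(this);  // each instance gets its own colors array
  this.type = type;
  this.name = name;
}

var dog = new Animal('dog', 'WangWang');
dog.colors.push('black');
var cat = new Animal('cat', 'MiMi');
cat.colors.push('yellow');

// dog.colors → ['red', 'green', 'black']
// cat.colors → ['red', 'green', 'yellow']
```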

And by using the call method, we can also pass arguments to the superclass:

function Species(feet) {
    this.colors = ['red', 'green'];
    this.feet = feet;
}

function Animal(type, name, feet) {
    Species.call(this, feet);
    this.type = type;
    this.name = name;
}

The problem with this method is the usual disease of constructors: methods cannot be reused, and every instance carries its own copy of each method. That is why it is generally used in combination with the prototype chain, as shown above.

4.3. Other schemes

Besides combination inheritance, there are three more approaches: prototypal inheritance, parasitic inheritance, and parasitic combination inheritance. They are less commonly used, so they are not introduced here; see JavaScript advanced programming (Third Edition) for details.

5. Summary

Through this article, from the simplest object to creating objects with the constructor pattern and the prototype pattern, we can see the strong vitality of the language, which becomes more and more interesting as it is refined. From this evolution we have grasped the ways of creating objects and studied the obscure concepts of prototype and prototype chain. Thoroughly understanding a concept often means tracing the history of its evolution: review the old and you will know the new.

6. References

[1] JavaScript advanced programming (Third Edition), chapter 6

[2] MDN

[3] ES5 standard

The past and present of JavaScript’s prototype and prototype chain (1)

Don't be frightened by the title; I am not going to recount the whole history of the prototype. This article just wants to help you understand why the prototype and the prototype chain exist. JavaScript is unique in this respect: in the other languages I have learned, I have never seen this concept, and it was also the most puzzling thing when I switched from C++ to JavaScript.

1. Start from creating objects in JavaScript

As we all know, JavaScript is an object-oriented language, but it has no concept of classes (setting aside the current ES6 standard; personally I feel ES6 is a new standard wrapped over ES5, its essence still being ES5, so mastering ES5 is mastering the essence). With no classes there must still be objects, and JS objects differ from those of other object-oriented languages such as C++: each object is based on a reference type (Array/Date/Function and so on are reference types; see chapter 5 of JavaScript advanced programming (Third Edition)) or on a custom type.

The most common way to create an object used to be creating an Object instance:

var animal = new Object();
animal.name = 'WangWang';
animal.type = 'dog';
animal.say = function () {
    console.log('I am a ' + this.type);
};

After that, the object literal way of creating objects appeared:

var animal = {
    name: 'WangWang',
    type: 'dog',
    say: function () {
        console.log('I am a ' + this.type);
    }
};

First of all, it is clear that an object contains properties and methods: name and type are clearly properties, and say is clearly a method. Second, properties have corresponding characteristics inside the browser, which are used by the internal JS engine.

1.1. About the properties (Property) of JS objects

According to the ES5 standard, a property is what we commonly picture as a name plus a value, similar to a key-value pair; but inside the browser there is much more to it.

Properties are divided into data properties (Data Property) and accessor properties (Accessor Property). The name and type just defined are data properties. The basis for distinguishing the two is that an accessor property has [[Get]] and [[Set]] methods and does not contain a [[Value]] characteristic (Attribute).

A data property contains 4 characteristics: [[Configurable]], [[Enumerable]], [[Writable]], and [[Value]].

An accessor property contains 4 characteristics: [[Configurable]], [[Enumerable]], [[Get]], and [[Set]].

Although these characteristics are used internally by browsers, ES5 still provides interfaces for us to call:

Object.defineProperty (obj, prop, descriptor);

Object.defineProperties (obj, props);

Object.getOwnPropertyDescriptor (obj, prop);

Object.getOwnPropertyDescriptors (obj);

Take an example (in the Chrome console):

> Object.getOwnPropertyDescriptor(animal, "name")
< Object {value: "WangWang", writable: true, enumerable: true, configurable: true}

For more details on these 4 APIs (such as compatibility), refer to MDN.
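As an illustrative sketch (the animal object and the property names are examples), here is how the two kinds of properties can be defined through Object.defineProperty:

```javascript
var animal = { name: 'WangWang', type: 'dog' };

// A data property: [[Value]], [[Writable]], [[Enumerable]], [[Configurable]]
Object.defineProperty(animal, 'id', {
  value: 42,
  writable: false,
  enumerable: false,   // hidden from for-in and Object.keys
  configurable: false
});

// An accessor property: [[Get]] (and optionally [[Set]]) instead of [[Value]]
Object.defineProperty(animal, 'label', {
  get: function () {
    return this.name + ' the ' + this.type;
  },
  enumerable: true,
  configurable: true
});

// Object.keys(animal) → ['name', 'type', 'label']  (id is non-enumerable)
// animal.label       → 'WangWang the dog'
```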

2. Advances in creating JS objects

Although the Object constructor or an object literal can be used to create a single object, there is an obvious flaw: mixing object creation and instantiation this way means the code cannot be reused, and piles of repeated code result. To solve this problem, a new way of creating objects was devised: the factory pattern. This form begins to approach the instantiation of classes and objects in C++ and is closer to actual code development.

2.1. The factory pattern

A very vivid name: as soon as we hear it, we know there is a factory. As long as we supply the raw materials, the factory's mold helps us create the objects we want (that is, the instantiation process).

Because ES5 cannot create classes, we can only use a function to encapsulate the details of creating an object with a specific interface. For example:

function createAnimal(name, type) {
    var o = new Object();
    o.name = name;
    o.type = type;
    o.say = function () {
        console.log('I am a ' + this.type);
    };
    return o;
}

var dog = createAnimal('WangWang', 'dog');

Although this approach solves the problem of repeated instantiation code, it does not solve the problem of object identification (that is, an object created this way cannot reveal its own type, whereas an object created with the constructor pattern introduced next can be identified as, say, Animal). So yet another way of creating objects appeared.
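A quick sketch of the identification problem (reusing the factory above in simplified form):

```javascript
function createAnimal(name, type) {
  var o = new Object();
  o.name = name;
  o.type = type;
  return o;
}

var dog = createAnimal('WangWang', 'dog');

// The factory leaves no trace of a specific type:
// dog.constructor === Object   → true (just a plain Object)
// dog instanceof createAnimal  → false
```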

2.2. The constructor pattern

The constructor is a basic concept in C++: a function called to initialize the object after it is instantiated, performing some assignment operations; it can be regarded as an initialization function. The constructors used in JS differ from C++ in form, but the essence is the same. JS provides native constructors such as Object/Array/String, and custom ones can also be created. For example:

function Animal(name, type) {
    this.name = name;
    this.type = type;
    this.say = function () {
        console.log('I am a ' + this.type);
    };
}

var dog = new Animal('WangWang', 'dog');

This constructor has the following three features:

There is no explicit creation of objects

Assign attributes and methods directly to the this object

No return statement

Performing the new operation goes through the following 4 steps:

Create an object

Assign the scope of the constructor to the new object (so the this pointer points to the new object).

Execute the code in the constructor

Return a new object
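The four steps above can be sketched by hand. The helper name fakeNew is ours, and this is a simplified illustration, not how engines literally implement new:

```javascript
function Animal(name) {
    this.name = name;
}

// A hand-rolled sketch of what `new Ctor(...)` does.
function fakeNew(Ctor) {
    var args = Array.prototype.slice.call(arguments, 1);
    var obj = {};                        // 1. create an object
    obj.__proto__ = Ctor.prototype;      // 2. link it to the constructor's prototype (so `this` resolves)
    var result = Ctor.apply(obj, args);  // 3. execute the constructor code with `this` bound to obj
    // 4. return the new object (unless the constructor itself returned an object)
    return (typeof result === 'object' && result !== null) ? result : obj;
}

var dog = fakeNew(Animal, 'WangWang');
console.log(dog.name);              // 'WangWang'
console.log(dog instanceof Animal); // true
```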

At this point dog is an instance of Animal. Following the convention of class-based languages, every instance has a constructor attribute, and in JS the instance's constructor attribute points to the Animal constructor.

The first three methods all create the same kind of object, so take one of them and compare it with the factory pattern.

We can see that the constructor-created object does carry some extra attributes, and why these attributes are concentrated in __proto__ is exactly what we will get to later.

So we can identify the type of the object through the constructor attribute (the Animal type in this example), or verify it with instanceof.
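A quick sketch of that identification, using a stripped-down Animal constructor:

```javascript
function Animal(name, type) {
    this.name = name;
    this.type = type;
}

var dog = new Animal('WangWang', 'dog');
console.log(dog.constructor === Animal); // true: the constructor attribute identifies the type
console.log(dog instanceof Animal);      // true
console.log(dog instanceof Object);      // true: instanceof also walks up the whole chain
```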

Of course, the constructor pattern is not perfect. Its main problem is that every method is re-created in every instance: when we create an Animal object, the method inside it is actually a new instance of the Function object, equivalent to:

this.say = new Function("console.log('I am a ' + this.type);");

So creating multiple instances creates multiple function objects, which obviously increases memory consumption. To solve this problem, we introduce the prototype pattern.
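The duplication is easy to observe; this small sketch has say return its string so the two copies can be compared:

```javascript
function Animal(type) {
    this.type = type;
    this.say = function () {
        return 'I am a ' + this.type;
    };
}

var a = new Animal('dog');
var b = new Animal('cat');
console.log(a.say === b.say); // false: two distinct Function objects, one per instance
console.log(a.say());         // 'I am a dog'
```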

3. The prototype pattern

In Figure 1 we saw that every object, however it is created, has a __proto__ attribute, which is the key link in the prototype chain. In the last section we listed the 4 steps performed by new; the second step, expressed in code, is an assignment: dog.__proto__ = Animal.prototype, which can be verified by printing to the console:

The prototype pattern takes the following form:

function Animal() {}
Animal.prototype.name = 'WangWang';
Animal.prototype.type = 'dog';
Animal.prototype.say = function () {
    console.log('I am a ' + this.type);
};

var dog = new Animal();

The difference between prototype mode and constructor mode can be seen in the following diagram:

Constructor pattern:

Prototype mode:

From the two pictures above we can see the advantages and disadvantages of each, and how to improve on them: could combining the two make full use of the advantages of both? If you are thinking that, you are right. That is Section 3.2.

Besides assigning to the prototype directly, we often prefer an object literal or the new keyword to set up the prototype. But both carry one very important caveat: whether you use an object literal or the new keyword, you are creating a new object that replaces the original prototype object.

Sounds abstract? A picture makes it clear:


function Animal() {}
Animal.prototype = {
    name: 'WangWang',
    type: 'dog',
    say: function () {
        console.log('I am a ' + this.type);
    }
};

var dog = new Animal();


function Species() {
    this.name = 'WangWang';
    this.type = 'dog';
    this.say = function () {
        console.log('I am a ' + this.type);
    };
}

function Animal() {}
Animal.prototype = new Species();

var dog = new Animal();

The prototype diagrams for these two forms are illustrated above.

Precisely because of this overriding effect, when you use these forms you must pay attention to whether the prototype object is still the original prototype object.
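The overriding effect can be checked directly; note in this sketch how replacing the prototype with an object literal silently swaps in a brand-new object whose constructor no longer points back to Animal:

```javascript
function Animal() {}
var original = Animal.prototype;

// Replacing the prototype with an object literal creates a brand-new object.
Animal.prototype = {
    type: 'dog',
    say: function () {
        return 'I am a ' + this.type;
    }
};

console.log(Animal.prototype === original);           // false: the old prototype was replaced
console.log(Animal.prototype.constructor === Animal); // false: it now inherits constructor from Object
console.log(new Animal().say());                      // 'I am a dog'
```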

Both patterns have their advantages. How are they used in combination? And what does the prototype chain have to do with the prototype?

See the next chapter: The Past and Present of JavaScript's Prototype and Prototype Chain (Part Two).

Front End from Getting Started to Re-entry: Asynchrony

What is asynchrony? That starts with the JavaScript language itself.

Born in 1995 as the scripting language of the browser, JavaScript was designed to be single-threaded: it can only do one thing at a time. Why? Imagine JavaScript were multithreaded, with one thread adding content to a DOM node while another thread deletes that same node; how should the browser respond?

With a single thread, JavaScript sidesteps the browser's complex synchronization problems, but a single thread forces us to introduce a concept: the task queue. In a civilized society where demand is high and supply is short, everyone queues up: when one task ends, the next begins. Seems perfect, but is there a flaw? There is. Imagine arriving at your favorite noodle shop to find a long queue at the door while the seats inside sit empty. The shop's process is: the customer orders and the waiter writes it down; the customer and the waiter stand staring at each other; the kitchen hands the food to the waiter, who hands it to the customer; the customer carries the food to a table; done.

The worst part of this process is the stage where the customer and the waiter stand staring at each other: time is wasted there, which is why the long queue of people is unhappy. How should we optimize it?

After discussing it with the noodle shop owner, we arrived at this optimization scheme:

The customer orders and the waiter writes it down; the waiter hands the customer a numbered tag; the customer sits at a table and plays with their phone; the kitchen hands the food to the waiter, who delivers it to the customer by number; done.

The boss loved it: fewer people waiting in line, and the waiter became more efficient, so he treated me to their best beef noodles.

This is asynchrony: the time-consuming operation is executed elsewhere, and its result is put back on the queue when it finishes. Meanwhile other operations can proceed instead of blocking. So how does JavaScript implement asynchrony? With the callback function.


The English term is callback: simply a function that is called after a task finishes, and which receives the result of that task. In the JavaScript world there are callback functions everywhere. The simplest example is the setTimeout method.

setTimeout(() => alert('I appear after 1000 milliseconds!'), 1000);

The code above waits 1000 milliseconds; () => alert('I appear after 1000 milliseconds!') is the callback function.
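A slightly fuller callback sketch may help; doWork below is our own stand-in for any asynchronous task, using the Node-style (error, result) callback convention:

```javascript
// doWork simulates an async task: it "finishes" after 10 ms and hands
// its result to the callback (Node convention: first argument is the error).
function doWork(input, callback) {
    setTimeout(function () {
        callback(null, input * 2);
    }, 10);
}

doWork(21, function (err, result) {
    console.log(result); // 42
});
```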

Callbacks are easy to use and well understood, but how do you combine the results of several asynchronous tasks? What if one asynchronous task is the precondition of another, whose result feeds a third? People started writing nested callbacks, commonly known as the callback pyramid.

fun1(function (value1) {
    fun2(value1, function (value2) {
        fun3(value2, function (value3) {
            fun4(value3, function (value4) {
                // ...
            });
        });
    });
});

Code like this is very common in callback-heavy environments such as Node. It feels perfectly fine while you write it; a day later you look back at your own code and wonder what on earth you were writing.

At this point we set out on the road to find the best solution for front-end asynchronous processing. As the title says: you think you are getting started, but in fact there is yet another door in front of you.

EventProxy and publish / subscribe patterns

EventProxy exists, to a great extent, to solve exactly this nesting. As the library's author Pu Ling and friends put it:

"There is no such thing as deep nesting of callbacks in the world." - Jackson Tian

"There are no nested callbacks in the world." - fengmk2

Suppose you want to fetch data asynchronously from several addresses and process it all only after everything has arrived. Short of the callback pyramid, the simplest approach is to write a counter:

var count = 0;
var result = {};

$.get('http://demo1', function (data) {
    result.data1 = data;
    count++;
    handle();
});

$.get('http://demo2', function (data) {
    result.data2 = data;
    count++;
    handle();
});

$.get('http://demo3', function (data) {
    result.data3 = data;
    count++;
    handle();
});

function handle() {
    if (count === 3) {
        // follow-up operations on result
    }
}

After the use of EventProxy:

var proxy = new EventProxy();

proxy.all('data1_event', 'data2_event', 'data3_event', function (data1, data2, data3) {
    // follow-up operations on the results
});

$.get('http://demo1', function (data) {
    proxy.emit('data1_event', data);
});

$.get('http://demo2', function (data) {
    proxy.emit('data2_event', data);
});

$.get('http://demo3', function (data) {
    proxy.emit('data3_event', data);
});

This is a typical event publish/subscribe pattern. Let's set the code aside and start with what the publish/subscribe pattern is.

The publish/subscribe pattern is also called the observer pattern. It defines a one-to-many dependency that allows multiple subscribers to listen to one subject object at the same time. When the subject object's state changes, it notifies all subscriber objects so that they can update their own state automatically.

In plain terms: the chef at your favorite noodle shop went home to get married, the new chef's beef noodles were not to your taste, so you phoned the restaurant every day asking for the old chef to come back. The boss, driven mad by similar calls every day, asked me how to deal with it. Simple, I said: write down the phone number of everyone who wants the old chef's beef noodles, and text them all when the old chef returns. So the boss's phone finally went quiet.

This silly example is the publish/subscribe pattern, which we run into constantly in daily life. The boss is the publisher and the customers are the subscribers: the customers subscribe to the news that the old chef is back, and when he returns, the boss publishes that news to them. What are the benefits of the pattern? It decouples the logic: the publisher need not care about the subscribers' specific business logic, nor how many subscribers there are, in order to deliver messages to them; and subscribers need not keep asking the publisher whether there is news, yet they get the message they want the moment it appears.
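The mechanics can be sketched in a few lines. This is a minimal publish/subscribe implementation of our own (the names PubSub, subscribe, and publish are ours, not EventProxy's API):

```javascript
// Minimal pub/sub: a topic maps to a list of handler functions.
function PubSub() {
    this.handlers = {};
}
PubSub.prototype.subscribe = function (topic, fn) {
    (this.handlers[topic] = this.handlers[topic] || []).push(fn);
};
PubSub.prototype.publish = function (topic, data) {
    (this.handlers[topic] || []).forEach(function (fn) { fn(data); });
};

// The boss writes down each customer; one publish notifies them all.
var shop = new PubSub();
var received = [];
shop.subscribe('chef_back', function (msg) { received.push('A: ' + msg); });
shop.subscribe('chef_back', function (msg) { received.push('B: ' + msg); });
shop.publish('chef_back', 'the old chef has returned');
console.log(received); // ['A: the old chef has returned', 'B: the old chef has returned']
```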

Back to EventProxy: in the example above, proxy.emit('data1_event', data) is the publish side, and proxy.all(...) is the subscribe side. EventProxy's all API can subscribe to several events at once and process all their results together, which is an extension of the ordinary publish/subscribe pattern.

EventProxy's implementation is derived from Backbone's event module; interested readers can check it out on GitHub.


What is a promise? The word promise means a commitment.

I remember first meeting promises in my first Angular project, whose code was full of promise.then().then() chains. I had no idea what they meant; they just looked high-end. My leader told me at the time: a promise is simple. It first gives you a commitment, and when that completes, the next step runs; it always passes along a promise object.

At the time I naively assumed promises existed only in Angular; later I discovered that Angular's $q is just one of many promise implementations. The entire AngularJS code base relies heavily on promises, both the framework and the application code you write with it.

Back to the point. Many third-party libraries implement promises, such as when.js and Angular's $q, and they follow the same specification: Promises/A+. The ECMAScript 2015 specification now includes native promises in JavaScript (if you need compatibility with older environments, use Babel).

A so-called Promise is a container that holds the result of an event that will finish in the future (usually an asynchronous operation). A promise has three possible states: incomplete (pending), completed (fulfilled), and failed (rejected). The state can only move from incomplete to completed, or from incomplete to failed; it can never be converted backward, and completed and failed cannot be converted into each other. Note also that what gets passed along between calls is again a promise object.

An example: you order a meal at the noodle shop, and the waiter gives you a meal ticket. The ticket itself is useless to you; you can neither eat it nor sell it, but it is a claim on your meal. That is the incomplete state. When the kitchen finishes, the waiter exchanges your ticket for the beef noodles you wanted: the completed state. If the kitchen has run out of beef, the waiter comes to apologize and offers you a different dish or a refund: that is the failed state and its handling. Note that the failure was not converted directly into completion; whether you change to another bowl or take the refund, it still went through the promise's failure path, since completed and failed cannot be converted into each other.

Now that we know the theory of promises, let's look at a native promise:

var promise = new Promise(function (resolve, reject) {
    // the waiter promises to give you a bowl of beef noodles
    if (success) {
        // resolve: here are your beef noodles
        return resolve(data);
    } else {
        // reject: the kitchen is out of beef, sorry
        return reject(data);
    }
});

promise.then(function (data) {
    // success: eat it
    console.log(data);
}, function (err) {
    // failure: I want a refund!
});
The variable promise here is an instance of Promise. If nothing goes wrong along the way, the first callback passed to then receives the beef noodles; if something does go wrong, the error is handled in the second callback passed to then.

Promise also provides a Promise.all() method, similar to EventProxy's all, except that it returns a promise object: when all of the promises succeed, it becomes completed; if any one of them fails, Promise.all becomes failed too. See the documentation for details.
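Promise.all can replace the hand-written counter from earlier. In this sketch, fakeGet is a stand-in we define ourselves, and the demo URLs are placeholders:

```javascript
// fakeGet simulates an async fetch that resolves after 10 ms.
function fakeGet(url) {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve('data from ' + url); }, 10);
    });
}

// One fulfilled promise per source; results arrive together, in request order.
Promise.all([fakeGet('demo1'), fakeGet('demo2'), fakeGet('demo3')])
    .then(function (results) {
        console.log(results.length); // 3
    });
```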

The biggest advantage of promises is that chained calls solve the deep-nesting problem of callbacks. The result looks elegant and is easy to understand and use. But do you think this is all there is to JavaScript asynchronous programming?
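How chaining flattens the pyramid can be shown with a tiny stand-in function of our own (step here plays the role of fun1..fun4 from the pyramid example):

```javascript
// Each step returns a promise of its input plus one.
function step(value) {
    return Promise.resolve(value + 1);
}

// The chain stays flat: each then receives the previous result.
step(1)
    .then(function (v) { return step(v); })
    .then(function (v) { return step(v); })
    .then(function (v) {
        console.log(v); // 4
    });
```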

Generator and TJ's co

Generator means generator. In the JavaScript world, an ordinary function cannot be suspended once it starts executing; it only has the states "not invoked" and "invoked". What happens when a function can be suspended?

A Generator is exactly such a suspendable function. In essence it can be understood as a special data structure. Compared with a normal function, it has one extra thing:

an asterisk, *

The asterisk appears between the function keyword and the function name (or directly after function, if the function is anonymous).
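A minimal generator sketch: execution pauses at each yield and resumes on the next call to next():

```javascript
// The * after `function` marks this as a generator.
function* counter() {
    yield 1;   // pause point 1
    yield 2;   // pause point 2
    return 3;  // final value, with done: true
}

var it = counter();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: true }
```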

Front-End Rendering Acceleration: Big Pipe


First-screen rendering speed has always been a pain point for the front end.

From the earliest days of static resource servers returning files directly, to CDN distribution, to server-side rendering technology, every step exists to give users the best possible experience.

Big Pipe is a first-screen loading acceleration technique adopted by Facebook; its effect can be clearly felt on Facebook's home page.

Brief introduction

At first glance it looks just like Ajax.

But keep in mind that Ajax is just another ordinary HTTP request. A complete HTTP request goes through:

DNS resolving -> TCP handshake -> HTTP request -> server processing -> HTTP response

The whole network round trip costs quite a lot of time.

Big Pipe, by contrast, only needs the one existing connection, with no additional requests.

The technology behind Big Pipe is not really complex. The server first sends the browser a document whose < body> has not been closed yet; at this point the browser renders the DOM it has already received (and applies any CSS as well). But the TCP connection has not been disconnected, and < body> is still unclosed, so the server can keep pushing more DOM to the browser, even < script> tags.

This way the browser can first show a page without data (the data modules displaying a loading state) while the server fetches the data from the database; the server then pushes a < script> tag carrying the data, and once the browser receives it, it can replace the corresponding placeholders.

The difference from server-side rendering

Server-side rendering has a lot in common with Big Pipe: it also fetches data on the server, fills it into the page DOM, and returns the result to the client. The biggest difference is that Big Pipe can return a page to the user before the data is ready, reducing the waiting time and preventing a long-blocking data operation from leaving the user staring at a blank page the whole time.

The code used in this article's example; the whole project is as follows:

'use strict';

const koa = require('koa');
const Readable = require('stream').Readable;
const co = require('co');

const app = koa();

const sleep = ms => new Promise(r => setTimeout(r, ms));

app.use(function* () {
    const view = new Readable();
    view._read = () => {};
    this.body = view;
    this.type = 'html';
    this.status = 200;

    view.push(`
        <html>
        <head>
            <title>BigPipe Test</title>
            <style>
                #loader {
                    width: 100px;
                    height: 100px;
                    border: 1px solid #ccc;
                    text-align: center;
                    vertical-align: middle;
                }
            </style>
        </head>
        <body>
            <div id="loader">
                <div id="content">Loading</div>
            </div>
    `);

    co(function* () {
        yield sleep(2000);
        view.push(`
            <script>
                document.getElementById('content').innerHTML = 'Hello World';
            </script>
        `);
        view.push('</body></html>');
        view.push(null);
    }).catch(e => {});
});

app.listen(5000);

Android application performance optimization – startup acceleration

While recently studying Android performance optimization, I first solved the stutter when opening a web page inside the application by introducing a third-party WebView component. But that introduced another problem: the third-party WebView component's initialization was placed in the Application, which made App startup noticeably longer. Today we will talk about how to optimize startup from the two angles of Application and Activity.

One. Application acceleration

App startup time is the time from the user tapping the app icon until the first interface is shown to the user. Shortening this time and quickly displaying the first interface can greatly improve the user experience. There are two main angles for optimizing the Application: one is to reduce the execution time of the onCreate method in Application; the other is to use a theme and Drawable to speed up the presentation of the first interface.

1. reduce the execution time of the onCreate method

An application freshly built with Android Studio starts very fast, but as the application's complexity grows and more third-party components are integrated, their initialization piles up in onCreate. You will clearly notice that App startup stutters and that the white or black screen before the first interface lasts longer: onCreate is simply taking too long to execute. To solve this problem, an IntentService can be used to handle the time-consuming initialization operations.

The core of the IntentService is roughly as follows (DwdInitService is the service name from the original project):

public static void start(Context context) {
    Intent intent = new Intent(context, DwdInitService.class);
    context.startService(intent);
}

@Override
protected void onHandleIntent(Intent intent) {
    if (intent != null) {
        // perform the time-consuming initialization here
    }
}

In the Application, simply call:

DwdInitService.start(this);

We moved the X5WebView initialization there, and the effect was quite obvious.

2. optimize the presentation of the first interface

As mentioned earlier, a white or black screen always appears when an App starts, which is particularly bad for the user experience. How do we eliminate this white screen? We can use a custom theme and Drawable. Here is a simple demo as a case; the effect is shown below.

Layout code, as follows

< ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:layout_marginBottom="24dp" android:src="@drawable/ic_launcher" />

The code is simple, but every time you start the app, you will see a white screen before the page is displayed. Now transform the code as follows:

a. Define a Drawable, loading.xml, which sets up the background and logo images.

b. Define a theme whose windowBackground is set to loading.xml:

< style name="Theme.Default.NoActionBar" parent="@style/AppTheme">
    < item name="android:windowBackground">@drawable/loading< /item>
< /style>

c. Set the defined theme on LoadingActivity.

OK, done. Start the App now: the white screen is gone, and the user experience is improved.

Two. Activity acceleration

Once inside the App, the speed of jumps between pages is also an important part of the user experience. For example, when opening an embedded web page, a stutter appears after tapping the trigger button, before the jump completes.

Optimizing an Activity likewise means reducing the execution time of its onCreate method. onCreate usually consists of two parts: setContentView() to inflate the layout, and data initialization and filling in onCreate.

The second part is easy to understand: minimize time-consuming data reading and computation in onCreate, and use asynchronous callbacks to reduce occupancy of the UI main thread.

Now look at setContentView: every control in the layout must be initialized, measured and laid out, and drawn, which are mostly time-consuming operations that slow down display. In a case with no time-consuming data operations in onCreate, profiling with the TraceView tool shows setContentView() taking up almost 99% of the time from the start of onCreate() to the end of onResume().

Reduce the time spent on setContentView:

1. reduce layout nesting level

a. Use relative layout

Reduce the use of LinearLayout and prefer RelativeLayout where possible to cut nesting levels. Nested LinearLayout instances that use the layout_weight attribute are especially costly, because each of the children is measured twice. RelativeLayout is more tedious to write, but it reduces the nesting level and the drawing time.

b. Use the < merge> tag to merge layouts

The < merge> tag reduces the layout level, but it only takes effect when the parent is a FrameLayout; since the parent of an Activity's content view is a FrameLayout, it applies there. After the < merge> tag is used, the layout hierarchy is reduced accordingly.

c. Use a control's own attributes to reduce nesting levels, for example the common linearly arranged menu layout, as follows:

The code is implemented in LinearLayout as follows:

< LinearLayout
    android:layout_width="match_parent"
    android:layout_height="62dip">

    < ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="…" />

    < TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_marginLeft="15dp"
        android:textSize="18sp" />
< /LinearLayout>

Using TextView's own drawableRight attribute instead, the code is as follows:

< TextView
    android:id="@+id/my_order"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:drawableRight="…"
    android:drawablePadding="15dip"
    android:gravity="center_vertical"
    android:paddingLeft="28dip" />

The amount of code and the nesting level are reduced accordingly, and the effect is just as good.

2. use ViewStub to delay inflation

ViewStub is a lightweight, invisible view. It can be placed in your layout to defer inflating part of the layout until it is needed; one common approach is to set a flag and inflate the stubbed layout in onResume when it is actually required.

Features: (1) A ViewStub can only be inflated once, after which the ViewStub object is cleared; in other words, once the layout a ViewStub specifies has been inflated, it can no longer be controlled through the ViewStub. (2) A ViewStub can only inflate a layout file, not a specific View (although the layout file may of course contain just one View). Usage scenarios: (1) a (complex) layout that, once inflated, will not change while the program runs, short of a restart; (2) what needs to be shown or hidden is a layout file rather than a single View. In one case, optimizing with ViewStub reduced the inflation time by one half to two thirds.

Use the following code in the onCreate() method to inflate the stubbed layout:

ViewStub viewStub = (ViewStub) findViewById(R.id.view_stub); // substitute your ViewStub's id here
viewStub.inflate();

Communication between Android components

First, let's sort out the ways different components communicate with each other in Android.

(Tips: apart from file storage and ContentProvider, the methods below generally refer to communication within the same process. Cross-process communication additionally requires Messenger or AIDL, which will be introduced in detail another time and is not discussed here.)

Mode one: use Intent to pass values (between Activity and Activity)

Sending example:

Intent intent = new Intent();
intent.putExtra("extra", "Activity1");
intent.setClass(Activity1.this, Activity2.class);
startActivity(intent);

Receiving example:

Intent intent = getIntent();
String data = intent.getStringExtra("extra");
TextView tv_data = (TextView) findViewById(R.id.tv_data); // substitute the TextView's id here
tv_data.setText(data);

Mode two: use Binder to pass values (between Activity and Service)

1. define Service

In the Service, define an inner class that inherits from Binder. Through this class, the Service object is handed to the Activity that needs it, so the Activity can call the Service's public methods and properties, as follows:

public class MyService extends Service {
    // instantiate the custom Binder class
    private final IBinder mBinder = new MyBinder();
    private String mStr = "I am the knight";

    /**
     * Custom Binder class (inner class); through it, the Activity gets the Service object.
     */
    public class MyBinder extends Binder {
        MyService getService() {
            // return the Service object associated with the Activity, so that the Activity
            // can invoke the Service's public methods and public properties
            return MyService.this;
        }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return mBinder;
    }

    /** public method that the Activity can call */
    public String getStr() {
        return mStr;
    }
}

2. the Activity binds the Service

The Activity gets the MyService object through the IBinder's getService, and can then call its public methods. The code is as follows:

public class MyBindingActivity extends Activity {
    // custom Service
    MyService mService;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    @Override
    protected void onStart() {
        super.onStart();
        // bind to the Service; once bound, mConnection's onServiceConnected is called
        Intent intent = new Intent(this, MyService.class);
        bindService(intent, mConnection, Context.BIND_AUTO_CREATE);
    }

    @Override
    protected void onStop() {
        super.onStop();
        unbindService(mConnection);
    }

    /** ServiceConnection used to bind the Service */
    private ServiceConnection mConnection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName className, IBinder service) {
            // we are bound to MyService: cast the IBinder, get the MyService object,
            // and from then on its public methods can be called
            MyBinder binder = (MyBinder) service;
            mService = binder.getService();
        }

        @Override
        public void onServiceDisconnected(ComponentName arg0) {
        }
    };
}

Mode three: use Broadcast to pass values

In essence, this uses Broadcast sending and receiving to implement communication.

Sending a Broadcast:

static final String ACTION_BROAD_TEST = "";
// send
Intent mIntent = new Intent(ACTION_BROAD_TEST);
sendBroadcast(mIntent);

Receiving the Broadcast:

// dynamically register the broadcast receiver
public void registerMessageReceiver() {
    mMessageReceiver = new MessageReceiver();
    IntentFilter filter = new IntentFilter();
    filter.addAction(ACTION_BROAD_TEST);
    registerReceiver(mMessageReceiver, filter);
}

public class MessageReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (intent.getAction().equals(ACTION_BROAD_TEST)) {
            // handle the broadcast
        }
    }
}





Mode four: use Application, SharedPreferences, file storage, database, ContentProvider, and so on

The idea is to use the Application object, whose life cycle is longer, to store data for different Activities and other components to read and write. But it is not safe: the Application may well be reclaimed. SharedPreferences, file storage, and databases all ultimately store data in the corresponding files, so we will not discuss them further.

Mode five: use interfaces

Define an interface; the components that care about an event implement it, and the component that triggers the event registers/unregisters the interested listeners. This is the observer pattern, and its problem is obvious: it tends to couple components ever more tightly, and the growing number of interfaces becomes troublesome. For space reasons we will not expand on it.

To sum up, every communication method has problems of one kind or another. Mode five, for example, couples components badly, especially as interfaces multiply; broadcasts are not a good fit when an Activity and a Fragment need to interact. So we need something simpler, EventBus, to achieve low-coupling communication between components.

Mode six: EventBus:

EventBus class library introduction

EventBus is an event bus class library for Android, optimized around the publish/subscribe pattern.

It simplifies communication between components and decouples event senders and receivers; it works well across Activities, Fragments, and background threads, and avoids complex and error-prone dependency and lifecycle issues. It makes your code more concise and faster, the library is small (< 50K jar), and it is already used by apps with more than 100,000,000 installs. It also offers advanced features such as delivery threads, subscriber priorities, and so on.

EventBus is used in three steps:

1. Define an event: public class MessageEvent { /* additional fields if needed */ }

2. Prepare subscribers: eventBus.register(this); and implement public void onEvent(AnyEventType event) { /* do something */ }

3. Post events: eventBus.post(event);

The following is an example of using EventBus on the Internet:

Problems with EventBus?

Of course, EventBus is not a panacea, and problems do arise in use. Its very convenience can lead to misuse that makes code logic more chaotic instead of clearer; for example, some places end up sending messages in a loop. In a later chapter we will look carefully at whether there is a better alternative to EventBus, such as RxJava.

Android Service Keep-Alive Attack and Defense

In June 2015 the company launched a project which, like ride-hailing apps, uploads latitude and longitude in real time, and that raised the problem of keeping the background Service alive. Because of business scenarios like this, many developers run into the keep-alive problem, and questions about it abound online, for example: how do I make an Android app keep running in the background, like QQ or WeChat, without being killed? There are many answers, but only a few approaches achieve the desired effect. Below I discuss the various schemes in light of my own development experience.

One. Why keep alive?

The need for keep-alive comes from wanting our service or process to keep running in the background while all kinds of causes shatter that hope. The main causes are: 1. Android system reclamation; 2. phone manufacturers' custom management systems, such as power management and memory management; 3. third-party software; 4. the user killing the app manually.

Two. Keep-alive techniques

1. Modify the return value of the onStartCommand method of Service

Can a service restart itself after it is killed? The usual approach is to change the return value of onStartCommand() to START_STICKY. onStartCommand() returns an integer that tells the system whether to recreate the service after killing it. There are three relevant return values:

START_STICKY: if the service's process is killed, the system keeps the service in the started state but does not retain the delivered intent object, and will later try to recreate the service (redelivering a null intent).

START_NOT_STICKY: with this return value, if the service is killed after onStartCommand() has finished executing, the system will not automatically recreate the service.

START_REDELIVER_INTENT: with this return value, if the service is killed after onStartCommand() has finished executing, the system will recreate the service after a while and redeliver the last Intent to it.

[feasibility] Judging from these descriptions, setting the return value to START_STICKY or START_REDELIVER_INTENT looks like a promising way to reach our goal. In actual testing, however, apart from kills caused by low memory, only a few scenarios and phone models actually restart the service.
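As a sketch, a started service only needs to change this return value to opt into restart. The LocationService name is taken from the manifest snippet in this article; the body is illustrative Android code and cannot run outside the framework:

```java
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Sketch of a service that asks the system to recreate it after a kill.
public class LocationService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // ... start the real-time location uploads here ...

        // START_STICKY: recreate the service after a kill, but redeliver
        // a null intent rather than the original one.
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started (not bound) service
    }
}
```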

2. Restart from Service's onDestroy method

Send a broadcast from onDestroy; a receiver then catches the broadcast and restarts the Service.


@Override
public void onDestroy() {
    stopForeground(true);
    // The broadcast action string was omitted in the original; a receiver
    // registered for this action restarts the service.
    Intent intent = new Intent("");
    sendBroadcast(intent);
    super.onDestroy();
}


[feasibility] Under the four causes listed above, when the Service is killed the whole APP process is usually gone as well, so onDestroy never executes and there is no chance to restart the service.

3. Raise the Service's priority

Declare a priority when registering the Service:

<service android:name="com.dwd.service.LocationService" android:exported="false">
    <intent-filter android:priority="1000"></intent-filter>
</service>

[feasibility] This approach does not work for a Service: android:priority affects intent resolution for broadcasts and activities, and has no effect on a service's process priority.

4. Foreground service

A foreground service is considered a running service the user is actively aware of, so the system will not kill its process when it needs to free memory. A foreground service must show a notification in the status bar.

NotificationCompat.Builder nb = new NotificationCompat.Builder(this);
nb.setOngoing(true);
nb.setContentTitle(getString(R.string.app_name));
nb.setContentText(getString(R.string.app_name));
nb.setSmallIcon(R.drawable.icon);
PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, new Intent(this, Main.class), 0);
nb.setContentIntent(pendingIntent);
startForeground(1423, nb.build());

[feasibility] This does help against system reclamation and lowers the probability of being killed, but under extreme memory pressure the system will still kill the process, and it will not restart. Likewise, cleaning tools or a manual force-stop will kill the process without a restart.

5. Guardian processes

There are two ways to implement this scheme. One is dual services (or dual processes): start two services that monitor each other, and when one is killed the other restarts it. The other is to fork a child process from the native layer to guard the main process.

[feasibility] With the first way, the two processes or services are killed along with the application process, so neither survives to do the restarting. The second way can genuinely wake the app up when it is killed, but Android 5.0 and above place the forked child in the same process group as its parent: when the main process dies the whole process group is killed, so the fork approach cannot wake the app on Android 5.0+.

6. Listening for system broadcasts

By listening to certain system broadcasts, such as boot completed, screen unlock, network connectivity changes, and application state changes, we can check whether the Service is alive and start it if it is not.

[feasibility] Since Android 3.1, to strengthen security and optimize performance, the system restricts broadcasts: an application that has never been launched after installation, or that the user has force-stopped, cannot receive common system broadcasts such as boot completed, screen unlock, or connectivity changes. Worse, the latest Android N removed the manifest-registered network-change broadcast, so this avenue keeps shrinking.
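Subject to the restrictions just described, a manifest-registered receiver for this scheme might look like the following sketch. WakeupReceiver is a made-up name; LocationService is the service from the manifest snippet earlier in this article:

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Sketch: when a system broadcast arrives (e.g. boot completed or a
// connectivity change registered in the manifest), (re)start the service.
public class WakeupReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // startService is a no-op if the service is already running,
        // so it doubles as an "ensure alive" check.
        context.startService(new Intent(context, LocationService.class));
    }
}
```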

7. Mutual wake-up between applications

Different apps can use broadcasts to wake each other up. For example, among Alibaba apps such as Alipay, Taobao, and Tmall, opening any one of them wakes the others; the big BAT app families almost all do the same. In addition, many push SDKs also wake up the apps that embed them.

[feasibility] Mutual wake-up requires a family of related apps to be installed, and SDK-based wake-up also fails once the user force-stops the app.

8. A one-pixel Activity

After the app goes to the background, it keeps a page of only 1 pixel on top of the desktop, so it still counts as foreground and protects itself from background-cleaning tools. This scheme is the trick Xiaomi exposed Tencent QQ as using.

[feasibility] It will still be killed.

9. Install the APK under /system/app, turning it into a system-level application.

[feasibility] This is only suitable for preinstalled applications; an ordinary app cannot turn itself into a system-level app.

10. Use the account and sync mechanism provided by the Android system

Create an account in the application, then enable automatic synchronization with a sync interval, and use sync to wake the app. Once created, the account is visible under Settings – Accounts, so the user may delete it or disable sync; the app therefore needs to check periodically that the account still exists and can sync.

// Create the account
AccountManager accountManager = AccountManager.get(mContext);
Account riderAccount = new Account(mContext.getString(R.string.app_name), Constant.ACCOUNT_TYPE);
accountManager.addAccountExplicitly(riderAccount, mContext.getString(R.string.app_name), null);
ContentResolver.addPeriodicSync(riderAccount, Constant.ACCOUNT_AUTHORITY, new Bundle(), 60);

// Enable automatic sync
ContentResolver.setSyncAutomatically(riderAccount, Constant.ACCOUNT_AUTHORITY, true);

[feasibility] Except on Meizu phones, this scheme can successfully wake the app no matter how it was killed. On Xiaomi phones, the "Shenyin" (background restriction) mode must be turned off first. The scheme has been around for nearly a year and many developers now use it.

11. Whitelisting

Get the application into the whitelist of the phone vendor or of security software, so the process is not reclaimed by the system. For example, WeChat and QQ are in Xiaomi's whitelist, so WeChat is not killed by the system, though the user can still stop it manually.

[feasibility] The success rate of this scheme is good, but users can still kill the application manually. Besides, unless the user base is large enough, it is hard for a developer to negotiate with every vendor: domestic Android phone makers are too numerous and the cost is too high. Once your installs and active users reach WeChat's level, perhaps the vendors will add your application to the whitelist on their own initiative.

I applied schemes 4, 6, 7, and 10 together and kept the service alive successfully on about 90% of phones. Keep-alive is really a war of attack and defense: apps want to run in the background to meet product requirements, while the system kills background services for the sake of performance and battery. It is also a protracted war; a scheme that works today may be blocked tomorrow. Learning never ends, and neither does exploration.

How we use Presto

Reasons for use:

Our big data developers and BI colleagues need to query all kinds of data in hive every day, and more and more reporting business runs on hive. Although the CDH cluster has impala deployed, most of our hive tables use the ORC format, which impala supports poorly. Before presto, the big data team queried these tables through hive, going via MapReduce, and query efficiency was low.

In the early days, operations set up a 3-node presto trial environment to query hive. Even simple SQL queries showed that the efficiency was very good.

Our existing tools could not query historical and real-time data at the same time. When BI raised the requirement of interactive queries across hive and MySQL, the big data team tested spark, presto, and other tools, and found presto the easiest to use and also well suited to BI colleagues. After internal deliberation, we decided to promote presto vigorously.

We built a presto cluster on the Hadoop nodes with 1 coordinator and 3 workers. As presto usage grew, it has since been expanded to 7 workers, and worker memory has been raised from the original 8 GB to 24 GB.

Presto introduction:

Presto is an open source distributed SQL query engine designed for interactive analytic queries over massive data. It mainly addresses the slow interactive analysis of commercial data warehouses. It supports standard ANSI SQL, including complex queries, aggregation, joins, and window functions.

Presto supports querying data where it lives, including Hive, Cassandra, relational databases, and proprietary data stores. A single Presto query can merge data from multiple sources, enabling analysis across the whole organization.

Working principle:

Presto's execution model is fundamentally different from that of Hive or MapReduce. Hive translates a query into multiple stages of MapReduce tasks that run one after another, each reading its input from disk and writing intermediate results back to disk. Presto does not use MapReduce: it has a custom query and execution engine with operators designed to support SQL semantics. Beyond an improved scheduling algorithm, all data processing happens in memory, with the processing stages forming a pipeline connected over the network. This avoids unnecessary disk reads and writes and the extra latency they bring. The pipelined model runs multiple processing stages concurrently and streams data from one stage to the next as soon as it becomes available, which greatly reduces the end-to-end response time of most queries.

Use the scene:

1. Everyday hive queries: more and more colleagues query hive through presto. Compared with MR-based hive queries, efficiency is greatly improved.

2. The data platform queries hive through presto for business dashboards.

3. Union across Cobar shards. cobarc has 5 shards with the data spread across them; querying cobarc used to mean querying the 5 shards separately and merging the results by hand, which was inefficient. cobarb has 8 databases and suffers the same problem. Through presto, the shards can be UNIONed in a single query, improving efficiency and simplifying the SQL.

4. Interactive queries between hive and mysql. Previously, hive's historical data and cobarc's real-time data could not be joined: you could query yesterday's data or today's data, but not both together. With presto, hive tables and cobarc tables can be joined in one query.
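As an illustration of such a cross-catalog join over JDBC, a single statement can read from both hive and mysql. Everything here is a placeholder: the host, catalog, schema, table, and column names are made up, and the snippet assumes the Presto JDBC driver is on the classpath and a coordinator is reachable.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: joining historical Hive data with real-time MySQL (Cobar) data
// in one Presto query over JDBC. All names below are illustrative.
public class PrestoJoinExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:presto://presto-coordinator:8080/hive/default";
        try (Connection conn = DriverManager.getConnection(url, "bi_user", null);
             Statement stmt = conn.createStatement()) {
            // One SQL statement spans two catalogs: hive (history) and mysql (today).
            ResultSet rs = stmt.executeQuery(
                "SELECT h.order_id, h.amount, m.status " +
                "FROM hive.dw.orders_history h " +
                "JOIN mysql.cobarc.orders_today m ON h.order_id = m.order_id");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getString(3));
            }
        }
    }
}
```

The catalog prefix (hive. / mysql.) is what lets presto route each table scan to the right connector while executing the join in its own engine.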

5. Mapping Kafka topics to tables. Some topics in our Kafka carry structured JSON data; presto can map such a topic to a table that can be queried directly with SQL. We have not yet applied this in production.

6. At present, web tools such as hue and Zeppelin are used to run presto queries, with support for exporting result sets.