Blockchain is a revolutionary technology. It is a database of transactions structured in blocks. Every block contains the transactions performed within an interval of time set by the administrator of the blockchain. A blockchain contains many blocks, and each block contains many transactions: this is the chain of blocks. Every block is chained to the previous and the next block with a cryptographic hash. The blockchain is a network of nodes that all keep the blocks synchronized in real time. For this reason we talk about a decentralized, distributed and encrypted database.
The essential characteristics are:
- immutability
- transparency
- consensus
- decentralized ledger
The data inserted become immutable once the block that contains them is mined.
Transparency is the possibility for every user to view all of their own data at any time.
Consensus is the principle that rules the blockchain: the consensus mechanism defines the rules for validating and changing the data inserted into a block. One method is Proof of Work, in which miners spend computational work to authorize transactions. Other methods are Proof of Authority and Proof of Stake.
The evolution of the ledger concerns the passage from the centralized ledger, where an authority manages the ledger, to a distributed or decentralized ledger with DApps, decentralized applications, where every node has the application without a centralized server.
The blockchain can be used as a public registry, open to all, secure and immutable over time.
The principal elements are:
- node
- transaction
- ledger
- hash --> a one-way function used to fingerprint and chain the transactions (it is not invertible)
- timestamp --> identifies the date and time of the transaction
The blockchain can be public, private (permissioned) or hybrid. Now we introduce the concept of the miner. The miner is a node that assembles a block, mines it, and chains it to the blockchain. For this it receives a remuneration.
An important innovation of the blockchain is the Smart Contract.
Smart Contracts are applications that are executed on the blockchain. For example, Smart Contracts run in the Ethereum Virtual Machine, an environment built on the Ethereum blockchain where developers can create Smart Contracts with languages such as Solidity. The blockchain and Smart Contracts open the door to DAOs, Decentralized Autonomous Organizations: organizations that live in program code (Smart Contracts) and whose ledger is on the blockchain.

CREATE PRIVATE NETWORK on Linux Ubuntu 16-04

In this post we show how you can create a private blockchain with Puppeth, a tool for managing an Ethereum private network. From a genesis JSON file, Puppeth creates a blockchain network to which you can add other nodes. First you must install Geth, the client that manages an Ethereum node. Geth can be launched with an interactive JavaScript console, which provides a JavaScript runtime environment with a JavaScript API to interact with your node.
The JavaScript console API includes the web3 JavaScript Ðapp API and an additional admin API.
From the terminal insert (on Ubuntu 16.04 you may first need to add the Ethereum PPA with sudo add-apt-repository -y ppa:ethereum/ethereum followed by sudo apt-get update):

sudo apt-get install ethereum

To view all the command-line options, insert from the terminal:

geth help

We create the genesis.json below:

{
  "config": {
    "chainId": 135,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "difficulty": "10",
  "gasLimit": "2100000",
  "alloc": {}
}

From your directory create 2 new directories; in this example we use node1 and node2.
Now we create 2 new accounts for the blockchain; remember the passwords you insert for the accounts.

geth --datadir ./node1/data account new

linux@linux:/media/linux/mecbar$geth --datadir ./node1/data account new

INFO [06-19|16:34:38] Maximum peer count ETH=25 LES=0 total=25

Your new account is locked with a password. Please give a password. Do not forget this password.


Repeat passphrase:

Address: {3e1cdb3f887ca74ee1fa0a9fc630e0e3a9f3fc0b}


Now repeat this command for node2

geth --datadir ./node2/data account new

linux@linux:/media/linux/mecbar$ geth --datadir ./node2/data account new

INFO [06-19|16:34:57] Maximum peer count ETH=25 LES=0 total=25

Your new account is locked with a password. Please give a password. Do not forget this password.


Repeat passphrase:

Address: {1cf568d39134e27c0236b9276f5a37764454d03b}


Now we use Puppeth to create the chain:

puppeth

Puppeth asks you some questions that you must answer, such as the name of the network, the type of consensus, the time for every mined block, and the accounts you want pre-funded with ether, because transactions in the chain are not free: you must pay gas.

linux@linux:/media/linux/mecbar$ puppeth


| Welcome to puppeth, your Ethereum private network manager |

| This tool lets you create a new Ethereum network down to |

| the genesis block, bootnodes, miners and ethstats servers |

| without the hassle that it would normally entail. |

| Puppeth uses SSH to dial in to remote servers, and builds |

| its network components out of Docker containers using the |

| docker-compose toolset. |


Please specify a network name to administer (no spaces or hyphens, please)

> mecbar

Sweet, you can set this via --network=mecbar next time!

INFO [06-19|16:35:18] Administering Ethereum network name=mecbar

WARN [06-19|16:35:18] No previous configurations found path=/home/linux/.puppeth/mecbar

What would you like to do? (default = stats)

1. Show network stats

2. Configure new genesis

3. Track new remote server

4. Deploy network components

> 2

Which consensus engine to use? (default = clique)

1. Ethash - proof-of-work

2. Clique - proof-of-authority

> 2

How many seconds should blocks take? (default = 15)


>

Which accounts are allowed to seal? (mandatory at least one)

> 0x3e1cdb3f887ca74ee1fa0a9fc630e0e3a9f3fc0b

> 0x

Which accounts should be pre-funded? (advisable at least one)

> 0x1cf568d39134e27c0236b9276f5a37764454d03b

> 0x

Specify your chain/network ID if you want an explicit one (default = random)

> mecbar

INFO [06-19|16:36:00] Configured new genesis block

What would you like to do? (default = stats)

1. Show network stats

2. Manage existing genesis

3. Track new remote server

4. Deploy network components

> 2

1. Modify existing fork rules

2. Export genesis configuration

3. Remove genesis configuration

> 2

Which file to save the genesis into? (default = mecbar.json)


INFO [06-19|16:36:08] Exported existing genesis block

What would you like to do? (default = stats)

1. Show network stats

2. Manage existing genesis

3. Track new remote server

4. Deploy network components

> ^C


To connect the nodes to each other we use bootnode, a tool that creates an enode URL that we will use in the next commands.
From the terminal:
bootnode -genkey boot.key
bootnode -nodekey './boot.key' -verbosity 7 -addr ''
linux@linux:/media/linux/cmecbar$ bootnode -nodekey './boot.key' -verbosity 7 -addr ''

INFO [06-17|11:03:32] UDP listener up self=enode://6765484c7e3a61defd9b10138c20c13ce36e265fc8109ad0d48ff25b4942aa9ab98e9166682c2db0889d31ecc970cbc7b96b07c5b4fc680cbd4771ba7b6e76df@

TRACE[06-20|15:28:16] >> NEIGHBORS/v4 addr= err=nil
TRACE[06-20|15:28:16] << FINDNODE/v4 addr= err=nil
TRACE[06-20|15:28:17] >> PONG/v4 addr= err=nil
TRACE[06-20|15:28:17] << PING/v4 addr= err=nil
TRACE[06-20|15:28:19] >> NEIGHBORS/v4 addr= err=nil
TRACE[06-20|15:28:19] << FINDNODE/v4 addr= err=nil
TRACE[06-20|15:28:20] >> PONG/v4 addr= err=nil
TRACE[06-20|15:28:20] << PING/v4 addr= err=nil
TRACE[06-20|15:28:20] >> PING/v4 addr= err=nil
TRACE[06-20|15:28:20] << PONG/v4 addr= err=nil
Now we initialize the genesis block in node1 and node2:

geth --datadir ./data init ../mecbar.json

linux@linux:/media/linux/mecbar$ cd node1

linux@linux:/media/linux/mecbar/node1$ geth --datadir ./data init ../mecbar.json

INFO [06-19|16:36:54] Maximum peer count ETH=25 LES=0 total=25

INFO [06-19|16:36:54] Allocated cache and file handles database=/media/linux/mecbar/node1/data/geth/chaindata cache=16 handles=16

INFO [06-19|16:36:54] Writing custom genesis block

INFO [06-19|16:36:54] Persisted trie from memory database nodes=355 size=51.91kB time=1.400113ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B

INFO [06-19|16:36:54] Successfully wrote genesis state database=chaindata hash=70670a…72d934 INFO [06-19|16:36:54] Allocated cache and file handles database=/media/linux/mecbar/node1/data/geth/lightchaindata cache=16 handles=16

INFO [06-19|16:36:54] Writing custom genesis block

INFO [06-19|16:36:54] Persisted trie from memory database nodes=355 size=51.91kB time=1.560455ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B

INFO [06-19|16:36:54] Successfully wrote genesis state database=lightchaindata hash=70670a…72d934

linux@linux:/media/linux/mecbar/node1$ cd -


linux@linux:/media/linux/mecbar$ cd node2

linux@linux:/media/linux/mecbar/node2$ geth --datadir ./data init ../mecbar.json

INFO [06-19|16:37:17] Maximum peer count ETH=25 LES=0 total=25

INFO [06-19|16:37:17] Allocated cache and file handles database=/media/linux/mecbar/node2/data/geth/chaindata cache=16 handles=16

INFO [06-19|16:37:17] Writing custom genesis block

INFO [06-19|16:37:17] Persisted trie from memory database nodes=355 size=51.91kB time=1.462966ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B

INFO [06-19|16:37:17] Successfully wrote genesis state database=chaindata hash=70670a…72d934 INFO [06-19|16:37:17] Allocated cache and file handles database=/media/linux/mecbar/node2/data/geth/lightchaindata cache=16 handles=1

INFO [06-19|16:37:17] Writing custom genesis block

INFO [06-19|16:37:17] Persisted trie from memory database nodes=355 size=51.91kB time=1.359188ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B

INFO [06-19|16:37:17] Successfully wrote genesis state database=lightchaindata hash=70670a…72d934

linux@linux:/media/linux/mecbar/node2$ cd -


linux@linux:/media/linux/mecbar$ cd node1


In node1 and node2, insert the password used before, when we created the accounts, into a file (for example password.txt) for every node.

Now we start the JavaScript console of our network. From the terminal insert the geth command with the bootnodes parameter set to the enode:

geth --datadir "./data" --networkid 135 --port 30305 --ipcdisable --rpc --rpccorsdomain '*' --rpcapi="net,eth,web3,personal,admin,miner" --rpcport 8545 --bootnodes 'enode://8ea74489957a2dded29d1f40ead222561466f0be26063bbbf76aa11cb688b366936b59c134015bc6afc8f6e6030d7b30bbbc513f4759f0ec02f7474454ab6fd9@' --unlock 25b8a2686ffc5ae2a6de5cd08a6ab6ff7a6133d7 --password password.txt --mine --syncmode full console
linux@linux:/media/linux/mecbar/node1$ geth --datadir "./data" --networkid 135 --port 30305 --ipcdisable --rpc --rpccorsdomain '*' --rpcapi="net,eth,web3,personal,admin,miner" --rpcport 8545 --bootnodes 'enode://8ea74489957a2dded29d1f40ead222561466f0be26063bbbf76aa11cb688b366936b59c134015bc6afc8f6e6030d7b30bbbc513f4759f0ec02f7474454ab6fd9@' --unlock 25b8a2686ffc5ae2a6de5cd08a6ab6ff7a6133d7 --password password.txt --mine --syncmode full console

INFO [06-19|16:40:48] Maximum peer count ETH=25 LES=0 total=25

INFO [06-19|16:40:48] Starting peer-to-peer node instance=Geth/v1.8.11-stable-dea1ce05/linux-amd64/go1.10

INFO [06-19|16:40:48] Starting P2P networking

INFO [06-19|16:40:50] UDP listener up self=enode://75fdcbfaa6c5bb728effc2770b23069d13bea12f26eb697b489ae5cb40

INFO [06-19|16:41:06] Successfully sealed new block number=2 hash=32c60c…9baf49

INFO [06-19|16:41:06] 🔨 mined potential block number=2 hash=32c60c…9baf49

INFO [06-19|16:41:06] Commit new mining work number=3 txs=0 uncles=0 elapsed=1.911ms


On node2 start the JavaScript console:

geth --datadir "./data" --networkid 135 --port 30306 --ipcdisable --rpc --rpccorsdomain '*' --rpcport 8546 --bootnodes 'enode://8ea74489957a2dded29d1f40ead222561466f0be26063bbbf76aa11cb688b366936b59c134015bc6afc8f6e6030d7b30bbbc513f4759f0ec02f7474454ab6fd9@' --syncmode full console

OK, now our network is ready.



This is the definition from ethereum.org: "Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third-party interference. These apps run on a custom built blockchain, an enormously powerful shared global infrastructure that can move value around and represent the ownership of property. This enables developers to create markets, store registries of debts or promises, move funds in accordance with instructions given long in the past (like a will or a futures contract) and many other things that have not been invented yet, all without a middleman or counterparty risk. The project was bootstrapped via an ether presale in August 2014 by fans all around the world. It is developed by the Ethereum Foundation, a Swiss non-profit, with contributions from great minds across the globe."
We use the Ethereum test networks to test our private network and the Smart Contracts. In the Ethereum blockchain we have the concept of Gas, the unit in which EVM resource usage is measured. GasUsed multiplied by GasPrice gives the cost, in wei, of executing a transaction. If your funds are not sufficient, the transaction will not be executed. This fee is the remuneration of the miner. GasUsed is the quantity of gas necessary to execute the transaction, and GasPrice is the price you pay for each unit of gas. There are different networks for testing the blockchain. We use the Rinkeby network, but you can use Ropsten or Kovan. Smart Contracts can be created with the Solidity programming language. To view the results of transactions, the pending transactions, the smart contracts and the blocks, go to the following link: https://rinkeby.etherscan.io/.
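As a numeric sketch of the fee formula (the gas price below is a hypothetical figure; 21000 is the standard gas cost of a plain ether transfer):

```javascript
// Fee in wei = GasUsed × GasPrice.
const gasUsed = 21000n;            // gas consumed by a simple ether transfer
const gasPrice = 20000000000n;     // 20 gwei per gas unit, expressed in wei
const feeWei = gasUsed * gasPrice; // what the sender pays the miner

console.log(feeWei.toString());     // "420000000000000" wei
console.log(Number(feeWei) / 1e18); // 0.00042 ether (1 ether = 1e18 wei)
```

Raising the gas price you offer does not change GasUsed; it only makes miners more likely to include your transaction sooner.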
To test the Solidity Smart Contracts we use the Remix web-browser IDE at the following link: https://remix.ethereum.org/.
Mist is a browser that uses the Web3.js library and communicates with the Ethereum network for transactions and Smart Contracts on the blockchain. Web3.js connects to an Ethereum node via JSON-RPC, over an HTTP or IPC connection. With the Web3.js library we can operate on the blockchain from our own application. Another way to operate on the blockchain is the Geth JavaScript console, written in Go, which uses Web3.js to operate on the blockchain.
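Under the hood, Web3.js talks to the node with JSON-RPC messages like the one built below (eth_blockNumber is a real method name; the helper function itself is a hand-rolled sketch, not the Web3.js API):

```javascript
// Build the JSON-RPC 2.0 request body that would be POSTed to the
// node's HTTP endpoint (e.g. the --rpcport opened by geth).
function rpcRequest(method, params, id) {
  return JSON.stringify({ jsonrpc: '2.0', method, params, id });
}

const body = rpcRequest('eth_blockNumber', [], 1);
console.log(body); // {"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}
```

Web3.js wraps exactly this kind of exchange so that your application can call methods like fetching the block number without building the payloads by hand.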



Hyperledger is an open-source blockchain project started by the Linux Foundation to develop technology for business and commercial use. The Hyperledger project is composed of several Distributed Ledger Technology (DLT) frameworks and tools. Here we talk about the DLT framework Hyperledger Fabric with the tool Hyperledger Composer. A DLT is a decentralized system for the exchange of VALUE.
The fundamental elements are:
- ASSETS - represent value and can be tangible or intangible
- PARTICIPANTS - users that can use the blockchain
- TRANSACTIONS - the operations that can be executed on the blockchain
- EVENTS - notifications that transactions can emit
Blockchains like Ethereum are public: every user can view every transaction. Blockchains built with Hyperledger, instead, are permissioned networks.
The administrator of the network sets which users can access the network, and the relative authentication and authorization.
The administrator knows the identities of the participants and manages access control and the roles assigned, with restrictions on the actions of every user.
With access control the admin manages which transactions a participant can view and execute: confidential transactions.
No miner is required, because transaction validation is governed by the network administrator; there is no consensus method such as proof-of-work or proof-of-authority.
There is no cryptocurrency, and the network is programmable: the chaincode automates the business processes and the execution of transactions.
We use Composer for the creation and management of Business Network Applications (BNA), and JavaScript for coding transactions.
It offers tools for architects, developers, administrators, operations and business analysts.
It offers an object-oriented modeling language for defining the domain model of the BNA, and it supports JavaScript applications.
Composer also offers the Playground, a web-browser application where you can simulate and test your BNA.
Another important tool is the Composer REST Server, which permits the connection between a web interface and the blockchain.
The REST Server allows us to execute CRUD (Create, Read, Update, Delete) operations on the blockchain.
We can also operate in JavaScript for CRUD operations on assets, transactions and events.
We can create a consortium by creating a channel with two or more organizations, each with one or more peers.
Every organization has a Membership Service Provider (MSP) that manages roles and access restrictions. The MSP of every organization manages the authority of its members and can create and revoke identities. Every peer has a local MSP to manage local identities, and every network has at least one MSP. Large organizations may have several MSPs, for example Org1-MSP-National, Org1-MSP-International, Org1-MSP-Distribution, etc.
In a multi-organization channel there is an MSP for the channel, consisting of the MSPs of the organizations that are part of the channel. The image below shows an example of a channel composed of Org1, Org2, Org3 and Org4. As you can see, not all the peers participate in the channel. The MSP of every organization allows its peers to participate, and assigns the role of every peer through its identity. In the image below you can see the channel with the MSP.
Identity is managed by way of certificates issued by Fabric CA, a built-in certificate authority that creates, updates and deletes identities. As you can see in the graph, the file ca.org1.exemple.com contains the CA authority and MSP. Within an organization not all nodes are equal: there are orderer nodes, peer nodes and client nodes. The peer nodes are connected to the channel. There is an anchor peer and an endorser peer. The endorsement policy identifies the endorser nodes that verify every transaction; after this verification the transaction is executed on the blockchain.
This is the consensus process; it is composed of three phases. In the first, the proposal, the client asks the endorser peer to execute a transaction, and the endorser asks the peers defined in the endorsement policy to simulate it. These peers return the result of the simulation to the endorser, and if the results are OK the process goes on. In the second phase, the packaging, the orderer creates a package with all the successful transactions, to create a block that will be inserted into all peers, including the endorser peer. The third phase is the validation: the block is sent to all peers, which verify and validate it, so that the ledger is updated on all peers in the blockchain network.
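The three phases can be sketched as a toy simulation (plain JavaScript, not the Fabric SDK; the peer names and the rule "all endorsing peers must agree" are illustrative assumptions):

```javascript
// Phase 1 - proposal: endorsing peers simulate the transaction and report a verdict.
function propose(tx, endorsers) {
  return endorsers.map(peer => ({ peer: peer.name, ok: peer.simulate(tx) }));
}

// Phase 2 - packaging: batch the fully endorsed transactions into a block.
function packageBlock(txs, endorsers) {
  const endorsed = txs.filter(tx => propose(tx, endorsers).every(r => r.ok));
  return { transactions: endorsed };
}

// Phase 3 - validation: every peer appends the validated block to its own ledger.
function validate(block, peers) {
  peers.forEach(peer => peer.ledger.push(block));
}

const peers = [1, 2, 3].map(n => ({
  name: 'P' + n,
  ledger: [],
  simulate: tx => tx.valid, // stand-in for actually running the chaincode
}));

const block = packageBlock([{ id: 'T1', valid: true }, { id: 'T2', valid: false }], peers);
validate(block, peers);
console.log(block.transactions.length);               // 1 - only T1 was endorsed
console.log(peers.every(p => p.ledger.length === 1)); // true - all ledgers updated
```

The point of the separation is that simulation (phase 1) and ordering (phase 2) are decoupled, so peers never execute unvalidated transactions against their ledger.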
In the image we can see the client that sends the transaction T1 to the endorser, which sends T1 to the peers P1, P2, P3, which then answer the endorser with the result of the simulation. If all is all right, the endorser sends the transaction for execution, and the blockchain is updated on all peers from P1 to Pn.
The anchor peer receives requests from clients and data from the orderers, then aligns all the peers of the organization. The orderer peer ensures the consistency of the ledger data across nodes and the order of the transactions.
The scalability of the model is ensured: every organization can add all the peers that it wants. Any peer of an organization can take part in another organization if permission is granted by the network administrator. As you can see in the next graph, an organization can take part in different channels. The chaincode can be written in the languages Go, Java and JavaScript.
The state of the assets is mutable, because the ledger represents the changing value of the assets, while the transaction log is immutable, because it contains all the transactions performed. By replaying the transaction log we can recreate the state of the assets.
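That last point can be sketched in a few lines (the asset names and values are made up): the mutable world state is just a fold over the immutable log:

```javascript
// Rebuild the current state of every asset by replaying the transaction log.
function replay(log) {
  const state = {};
  for (const tx of log) {
    state[tx.asset] = tx.newValue; // later transactions overwrite earlier values
  }
  return state;
}

const txLog = [
  { asset: 'car1', newValue: 10000 },
  { asset: 'car2', newValue: 8000 },
  { asset: 'car1', newValue: 7500 }, // car1 re-valued by a later transaction
];

console.log(replay(txLog)); // { car1: 7500, car2: 8000 }
```

This is why the world state can be stored in an ordinary database (CouchDB in the default Fabric setup): it can always be rebuilt from the log.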
To test Hyperledger use Solo, for a single node; for production use Apache Kafka, for a multi-node setup connecting all the nodes.

Install Hyperledger Fabric and Composer

Prerequisites for installing Hyperledger Fabric

Before you install Hyperledger on your system you must have installed the following applications:
Node.js with npm
An IDE, for example Visual Studio Code with the Hyperledger Composer Extension for VSCode

To install Hyperledger Fabric and Composer on an Ubuntu machine, launch the following commands in the terminal:

npm install -g composer-cli@latest

npm install -g composer-rest-server@latest

npm install -g generator-hyperledger-composer@latest

npm install -g yo

If you want, you can install the Playground web application to test the BNA:
npm install -g composer-playground@latest

Now you must create the following directory
mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers

Launch the command to download the files used to start and stop Hyperledger:

curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.tar.gz
tar -xvf fabric-dev-servers.tar.gz

Move into the directory and set the Fabric version to use:

cd ~/fabric-dev-servers
export FABRIC_VERSION=hlfv12

Now you can start Hyperledger Fabric:

cd ~/fabric-dev-servers
./startFabric.sh


mecbar@linux:~$ cd ~/fabric-dev-servers
mecbar@linux:~/fabric-dev-servers$ ./startFabric.sh
Development only script for Hyperledger Fabric control
Running 'startFabric.sh'
FABRIC_VERSION is set to 'hlfv12'
FABRIC_START_TIMEOUT is unset, assuming 15 (seconds)
Removing peer0.org1.example.com ... done
Removing ca.org1.example.com ... done
Removing couchdb ... done
Removing orderer.example.com ... done
Removing network composer_default
Creating network "composer_default" with the default driver
Creating couchdb ...
Creating ca.org1.example.com ...
Creating orderer.example.com ...
Creating couchdb
Creating orderer.example.com
Creating couchdb ... done
Creating peer0.org1.example.com ...
Creating ca.org1.example.com ... done

Congratulations! Now you are ready to go with Hyperledger!

mecbar@linux:~/$ ./restartFabric.sh
Starting ca.org1.example.com ... done
Starting couchdb ... done
Starting orderer.example.com ... done
Starting peer0.org1.example.com ... done

mecbar@linux:~/$ ./stopFabric.sh
Stopping peer0.org1.example.com ... done
Stopping orderer.example.com ... done
Stopping ca.org1.example.com ... done
Stopping couchdb ... done

Command to create the Peer Administrator card:

mecbar@linux:~/fabric-dev-servers$ ./createPeerAdminCard.sh
Development only script for Hyperledger Fabric control
Running 'createPeerAdminCard.sh'
FABRIC_VERSION is set to 'hlfv12'
FABRIC_START_TIMEOUT is unset, assuming 15 (seconds)
Using composer-cli at v0.20.0
Successfully created business network card file to
Output file: /tmp/PeerAdmin@hlfv1.card
Command succeeded
Deleted Business Network Card: PeerAdmin@hlfv1
Command succeeded
Successfully imported business network card
Card file: /tmp/PeerAdmin@hlfv1.card
Card name: PeerAdmin@hlfv1
Command succeeded
The following Business Network Cards are available:
Connection Profile: hlfv1
Issue composer card list --card < Card Name > to get details a specific card
Command succeeded
Hyperledger Composer PeerAdmin card has been imported, host of fabric specified as 'localhost'


mecbar@linux:~/MYPROJECT/dist$ yo hyperledger-composer
Welcome to the Hyperledger Composer project generator
? Please select the type of project: Angular
You can run this generator using: 'yo hyperledger-composer:angular'
Welcome to the Hyperledger Composer Angular project generator
? Do you want to connect to a running Business Network? Yes
? Project name: ProjectName
? Description: Hyperledger Composer Angular project
? Author name: mecbar
? Author email: mecbar@mecbar.com
? License: Apache-2.0
? Name of the Business Network card: admin@projectname
? Do you want to generate a new REST API or connect to an existing REST API? Generate a new REST API
? REST server port: 3000
? Should namespaces be used in the generated REST API? Always use namespaces
Created application!

Now the application is ready. You must start the Composer REST Server, then launch the app with the command npm start and go to localhost:4200 in the web browser.


namespace org.project.one.view

participant User identified by participantId {
  o String participantId
  o String firstName
  o String lastName
}

asset nameAsset identified by numberId {
  o String       numberId
  o DateTime     data
  o Double       field1
  o String       field2
  --> otherAsset fieldId
}

asset otherAsset identified by fieldId {
  o String      fieldId
  o String      field4
}

transaction nameTransaction {
  --> otherAsset field
  o String      newfield4
}

event primeEvent {
  --> otherAsset field
  o String oldValue
  o String newValue
}
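A transaction processor for the hypothetical nameTransaction above might look like the sketch below. In a real BNA this function lives in logic.js and getAssetRegistry is provided by the Composer runtime; here the registry is stubbed so the sketch is self-contained, and all names follow the model above.

```javascript
// Processor sketch: apply newfield4 to the referenced otherAsset and persist it.
async function onNameTransaction(tx, getAssetRegistry) {
  const registry = await getAssetRegistry('org.project.one.view.otherAsset');
  tx.field.field4 = tx.newfield4; // update the asset pointed to by the --> relationship
  await registry.update(tx.field);
}

// Stub registry standing in for the Composer runtime's asset registry:
const stored = {};
const stubRegistry = { update: async asset => { stored[asset.fieldId] = asset; } };

const tx = { field: { fieldId: 'A1', field4: 'old' }, newfield4: 'new' };
onNameTransaction(tx, async () => stubRegistry)
  .then(() => console.log(stored.A1.field4)); // prints "new"
```

In a deployed network the same function would also emit the primeEvent defined in the model, carrying the old and new values.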


From the terminal, after updating the data in package.json and executing the startFabric command:

cd dist
Create a BNA file
composer archive create --sourceType dir --sourceName ../
Install the Business Network Application
composer network install -a ./name_app@0.0.1.bna -c PeerAdmin@hlfv1
Start the BNA with the Card file
composer network start -c PeerAdmin@hlfv1 -n name_app -V 0.0.1 -A admin -S adminpw
Import the card that was generated

composer card delete -c admin@name_app
composer card import -f ./admin@name_app.card

View the list of the network apps for this card
composer network list -c admin@name_app
Verify if network working with ping command
composer network ping -c admin@name_app
Launch the Rest Server
composer-rest-server -c admin@name_app -n never
Go to localhost:3000 in web browser for view your application
Other command for the Composer CLI:
The network administrator adds a new participant:
composer participant add --help
Create identities for a participant:
composer identity --help
View the list of identities:
composer identity list -c admin@name_app



Solidity is a contract-oriented, high-level language for implementing smart contracts. It is a statically typed language similar to object-oriented languages. It was influenced by C++, Python and JavaScript, and it is designed to compile smart contracts for the Ethereum Virtual Machine (EVM), to be executed on the Ethereum blockchain network. The best way to try the Solidity language is Remix, a web-browser IDE at the following link: https://remix.ethereum.org. In the Remix application you can compile, run and debug your Solidity program before you deploy it to the Ethereum network.
A Solidity program is composed of the following elements:
pragma solidity ^0.4.7 — the version of the compiler
the contract name
state/storage variables

It is possible to insert multiple contracts in one program, and a contract can be invoked, inherited, created or imported:
import "A.sol";
contract B is A {
}
Variables can be boolean, signed integer (int) or unsigned integer (uint), with sizes from int8/uint8 up to int256/uint256, plus the bytes types.
There are some predefined members that return a value:
- balance --> address.balance returns the balance of an address (wallet) in wei
- address.transfer(10) or address.send(20) transfer respectively 10 and 20 wei
In Solidity, to check the value of a variable, there is no null or undefined as in JavaScript, but other methods are possible. Below are some examples.

Verify if an address variable is still unset (the zero address):
address me;
bool name;
name = (me == address(0x0));
// if me is the zero address then name is true
Verify if an array is empty:
bool arrayempty;
uint16[] myArray;
arrayempty = (myArray.length == 0);
// if myArray is empty then arrayempty is true
Type conversions can be implicit or explicit; an explicit conversion can cause loss of data if not done correctly.

Storage is a persistent (database-like) key/value store; reads and writes are costly. A contract can manage only its own storage, for security.
In storage are saved the state variables, plus local variables and function arguments explicitly declared as storage references (only for reference types such as arrays and structs):
function A(uint[] storage c, int d, bool[] storage e) // c and e are storage references
Memory holds temporary data used inside a function: input arguments, arrays and variables created by value.
Example:
int number; // state variable, saved in storage
function A(int num) {
  // num is a copy in memory of the value passed in
}
Arrays can be static or dynamic; bytes and string are special array types.
A function can declare named return variables, which are returned automatically:
function A() returns (uint result)
Input variables are declared as (type name, ...). A function can also return a tuple, i.e. different types together.
A tuple declared with var is a list of variables that receives the results of a function:
var (a, b, c) = getTuple(); it is possible to skip a value, e.g. var (a, b, ) = getTuple();
Function overloading --> functions with the same name, called with a different number (or type) of input variables.
MAPPING: the mapping type is a hashtable-like structure; it can be allocated only in storage, as a state variable. It is a key/value structure.

mapping(key => value) name; for example mapping(address => uint8) balance;
This mapping named balance contains, for example, entries 0x0000.... => balance value.
The key can be of any type except mapping; the value type can itself be a mapping, e.g. mapping(address => sal) test; where sal is itself a mapping type.
Internally the key is hashed with keccak256.
A mapping is not iterable: access is permitted only by key.
No length function is defined.

Enums define custom types with a set of values, not inserted in the ABI definition:

enum City { Rome, Milan, Turin } // hypothetical example values
A struct defines an object:

struct myStruc {
  address myadd;
  bytes description;
  uint saldo;
}

// instance of the struct
myStruc mystruttura;
// structs can be contained in arrays and mappings
mystruttura = myStruc(myadd, description, 0);
// assigning copies to storage; a local storage reference updates the struct in storage

Abstract contracts: there is no keyword for abstract contracts; a contract is abstract when a function is declared but no body is provided. View the example below:

function a(uint b);

Multiple inheritance permits importing several contracts, but only a single contract is created. View the example below:

import "a.sol";
import "b.sol";
contract c is a, b {
}

If abstract functions are declared in an imported contract (a or b), they must be implemented in the new contract.
Visibility: public | private | internal | external.

public is the default for functions; public functions can be called both internally and from other contracts:

int public aa;
function a() external {} // can be called by other contracts
function b() internal {} // cannot be executed from another contract

private: available within the contract only, not available in derived contracts:

uint8 private bb;
function a() private {
  // available only for this contract
}

internal:
- the default for storage (state) variables
- not in the ABI
- functions can be invoked only within the contract
- functions/variables are valid in derived contracts

external:
- not applicable to storage variables
- an external function cannot be invoked from within the contract like a normal function; you need to use the keyword this

function a() external {} // callable from other contracts, or internally via this.a()
function b() {}
Function-type variables:
- can be assigned to other functions, received as parameters, and returned from functions
- function a() options {}
- the options are: internal | external, [constant], [payable], returns (...)

Ether units
The Ethereum cryptocurrency is composed of different units of measure, which can be converted between each other. The units are wei (the default unit), szabo, finney, ether, etc.; go to http://ether.fund/tool/converter to view them.
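The standard denominations can be checked with a quick conversion (the factors below are the standard ones: 1 szabo = 1e12 wei, 1 finney = 1e15 wei, 1 ether = 1e18 wei; the amount is made up):

```javascript
// Ether denominations expressed in wei, using BigInt to avoid rounding.
const WEI = 1n;
const SZABO = 10n ** 12n;
const FINNEY = 10n ** 15n;
const ETHER = 10n ** 18n;

const amount = 3n * FINNEY;               // 3 finney
console.log((amount / SZABO).toString()); // "3000" szabo
console.log(ETHER / FINNEY);              // 1000n finney per ether
```

Solidity itself accepts these as literal suffixes (e.g. 1 ether, 3 finney), which compile down to the wei values above.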
now returns the current block time in seconds (since 1970).
Time conversion is done by suffixing a literal with a time unit; it is possible to express time in seconds, minutes, hours, days, weeks and years.
The block object gives us current block information such as number, coinbase, timestamp, difficulty and gaslimit.
tx - the transaction:
tx.gasprice -> the gas price of the transaction
tx.origin -> do not use; it is preferable to use msg.sender

The msg object gives us information about the message:
msg.data -> the call data in bytes
msg.sender -> the address of the caller (0x...)
msg.value -> the number of wei sent with the message (the function must be marked payable to receive them)

throw is DEPRECATED, do not use it; it is recommended to use require() or revert() instead. These constructs:
• abort the transaction execution
• revert all state changes
• send no ether out; ether received in the transaction is returned
• do not refund gas already spent
• record the transaction on the chain; the nonce is valid and recorded
revert() behaves like throw, but throw uses up all of the remaining gas, while revert() refunds the unused gas.
require(condition) throws if the condition is not true.
assert(condition), like require, throws an exception if the condition is not true (it is intended for internal errors).
Constant Variables
• value defined at compile time
Constant Functions
• A constant function promises not to change the state of the contract
Fallback Function
• An un-named function in the contract
• Invoked when the call carries no data (or matches no function)
• Restrictions:
• No arguments • Cannot return anything • Maximum gas spend = 2300 gas
Function Modifiers
• A modifier changes the behavior of a function

modifier ownerOnly {
    if (owner == msg.sender) {
        _;   // run the body of the modified function
    } else {
        revert();
    }
}
function transfer() ownerOnly {} // applying the modifier: now only the owner can execute the function
Modifiers can take arguments.
Local variables declared within modifiers are NOT available in functions.
Applying Modifiers
• Multiple modifiers may be applied to a function; the order is important
• Modifiers are inheritable & may be overridden by a child contract

Logs & Events: when a contract's state changes via sendTransaction, the logs are updated, events are emitted, and the DApp (Decentralized Application) watching for those events reacts.
Logs can be accessed for checks and queries.
• Events are part of the ABI definition
• Event arguments are stored in the logs
• Logs can be read using topic filters
• Event arguments marked as indexed can be used as criteria/filters
• An event is declared like a function without a body


event NewEvent(address indexed who, string name, uint importo); // at most 3 arguments can be marked indexed

function a(string name) payable {
    if (msg.sender == owner) {
        NewEvent(msg.sender, name, msg.value); // emit the event
    }
}

Contract lifecycle: a compiled contract produces an ABI definition that is invoked for execution, and the programmer can insert a self-destruct option into the contract.

function killContract() ownerOnly {
    selfdestruct(myconto); // the contract balance is sent to this address
}

Transactions to a killed contract will fail, and funds sent to a self-destructed contract are LOST. To prevent fund loss, remove all references to dead contracts, or call a getter before sending to ensure that the contract is not dead.


To keep track of mapping keys, push each new address into an array whenever you insert into the mapping; addresses.length then gives the number of entries in the mapping.
mapping(address => bytes32) addressMap;
address[] addresses;

Sending ether (sender -> receiver): address.send() and address.transfer() can fail when:
1. the payable fallback function runs out of gas
2. the payable function runs out of gas
3. the payable function throws an exception
• send() returns false on failure but does NOT halt contract execution
• transfer() throws an exception on failure and HALTs the execution

Convolutional Neural Network - CNN
Here you can test the power of Artificial Intelligence.
Insert an image and the computer tells you the category identified in the image, which is compared against a model generated with a CNN (MXNET for Javascript) in your browser, without any server.
The Convolutional Neural Network is a tool of Deep Learning used to classify the objects in an image. The Convolutional Neural Network is composed of the following phases: Convolution, Pooling, Flattening and Full Connection.
The first phase, convolution, is used to reduce the dimensions of the image and to speed up the process.
The mathematical formula is the following: (I * K)(i, j) = Σm Σn I(i + m, j + n) · K(m, n)
First, take the matrix of the input image and some 3x3 matrices (5x5 or 7x7 in other methods). Such a matrix is called a Feature Detector or Kernel and is used to reduce the size of the image in bytes. For example, in the image below we extract from the input-image matrix a 3x3 sub-matrix, which we multiply element by element with the Feature Detector matrix. Then we sum the values of this new matrix and put the result into the corresponding cell of the Feature Map. The 3x3 sub-matrices are extracted from the input matrix from left to right and then from top to bottom. create Feature map
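The multiply-and-sum step described above can be sketched in plain Python (an illustration only, not the MXNET code the demo actually uses):

```python
# Slide a 3x3 Feature Detector (kernel) over the input matrix
# left-to-right, top-to-bottom; multiply element-wise, sum,
# and store each result in the Feature Map.
def convolve(image, kernel):
    k = len(kernel)                   # kernel size (3 here)
    rows = len(image) - k + 1         # Feature Map height
    cols = len(image[0]) - k + 1      # Feature Map width
    feature_map = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0
            for m in range(k):
                for n in range(k):
                    s += image[i + m][j + n] * kernel[m][n]
            feature_map[i][j] = s
    return feature_map

image = [[1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
kernel = [[1, 0, 0],      # a diagonal Feature Detector
          [0, 1, 0],
          [0, 0, 1]]
print(convolve(image, kernel))  # [[3, 0], [0, 3]]
```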
Repeat this process for every Feature Map to create. Remember that there must be many of them, so that many layers can be fed into the Neural Network. Different Feature Detectors are used to remove different features from the input image; for example, a Feature Detector can remove the color. The image below shows the scheme of the Feature Maps created.
Feature map
To obtain a non-linear representation, we must use an activation function in the Deep Learning network. The most used function is the Rectifier, called ReLU (Rectified Linear Unit). There are, of course, other activation functions, such as the Sigmoid Function, and variants of ReLU such as the Noisy ReLU and the Leaky ReLU.
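The activation functions named above are simple scalar formulas; as a plain-Python sketch:

```python
import math

def relu(x):                 # Rectified Linear Unit: max(0, x)
    return max(0.0, x)

def leaky_relu(x, a=0.01):   # Leaky ReLU: small slope for negative inputs
    return x if x > 0 else a * x

def sigmoid(x):              # Sigmoid squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))  # 0.0 3.0
print(leaky_relu(-2.0))       # -0.02
```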
Now we analyze the Pooling phase, which takes the Feature Maps just created and, with the Max Pooling method, creates the Pooled Feature Maps.
This method extracts 2x2 sub-matrices from the Feature Map; from each of these the maximum value is taken and inserted into the Pooled Feature Map.
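The Max Pooling step can be sketched the same way (illustrative plain Python):

```python
# Take non-overlapping 2x2 windows of the Feature Map and keep
# only the maximum value of each window.
def max_pool(feature_map, size=2):
    rows = len(feature_map) // size
    cols = len(feature_map[0]) // size
    pooled = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            window = [feature_map[i * size + m][j * size + n]
                      for m in range(size) for n in range(size)]
            pooled[i][j] = max(window)
    return pooled

fm = [[1, 3, 2, 0],
      [4, 2, 1, 1],
      [0, 1, 5, 6],
      [2, 2, 7, 3]]
print(max_pool(fm))  # [[4, 2], [2, 7]]
```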
In the scheme below, you can see in different colors the extracted matrices and the corresponding results.

In the Pooling phase we obtain the Pooled Feature Maps, further reducing the dimensions of the data that represent the image.
Then comes the Flattening phase, in which for every Pooled Feature Map we create a vector containing the values present in the map. These vectors are the input layer.
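Flattening is just reading the map row by row into a vector; a minimal sketch:

```python
# Turn a Pooled Feature Map into a flat vector (row by row)
# that can feed the input layer of the network.
def flatten(pooled_map):
    return [value for row in pooled_map for value in row]

print(flatten([[4, 2], [2, 7]]))  # [4, 2, 2, 7]
```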
Feature map
The last phase is the Full Connection, where the vectors are fed into our Neural Network with its hidden layers.
The Neural Network returns n outputs (y1, y2, ..., yn) and, through the Softmax Function, the output y with the greatest probability is the final result.
In the following image there is the scheme of the process just described, together with the Softmax Function formula.
schema cnn
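As a sketch of the Softmax step (plain Python, with illustrative output values):

```python
import math

# Turn the raw outputs (y1 ... yn) into probabilities that sum to 1;
# the index with the highest probability is the predicted class.
def softmax(outputs):
    m = max(outputs)                       # subtract the max for numerical stability
    exps = [math.exp(y - m) for y in outputs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)                    # three probabilities summing to 1
print(probs.index(max(probs)))  # 0 -> the first class is the final result
```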
Now we show you another tool of Artificial Intelligence: the ChatBot, a program created with machine learning that lets humans chat with an algorithm.
If you want to test the ChatBot, ask a question containing, for example, one of the keywords django, python, javascript, pm2, nodejs, uwsgi, hello or hi.
Mecbar blog
The Bank Marketing project with the Machine Learning Logistic Regression
We create a model to predict whether a client will subscribe to a bank product, the Term Deposit, offered through a Direct Marketing campaign.
The same project will be created with Gretl, Python and Deep Learning in Python.
To create the model we use the Backward Elimination method, and the CAP curve to evaluate it.
We use regression, but it is not possible to use Multiple Linear Regression because the result of the function is a discrete variable (0, 1); therefore we use Logistic Regression.
Logistic Regression is a regression model that determines the probability of a dependent variable that is categorical (Yes or No) or binary, as in our case (0, 1).
This is the formula: y (probability, the dependent variable) = σ(β0 + β1x1 + β2x2 + ... + βnxn)
where σ is the Sigmoid Function (image below), β0 ... βn are the coefficients and x1 ... xn are the independent variables.
The Sigmoid Function gives a result of 0 or 1 according to a discriminating threshold, generally 0.5, as in the image below:
logistic regression
The Sigmoid Function formula: σ(z) = 1 / (1 + e^(-z))

logistic regression
To create the model we minimize the cost function (a cost function is a measure of how wrong the model is in terms of its ability to estimate the relationship between x and y) in order to find the best coefficients, using Gradient Descent (an efficient optimization algorithm that attempts to find a minimum of a function).
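As a minimal sketch of this idea (one independent variable, a tiny made-up data set for illustration, not the Bank Marketing data): the coefficients β0 and β1 are found by gradient descent on the log-loss cost, and predictions use the 0.5 threshold described above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative data: y switches from 0 to 1 as x grows.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0,   0,   0,   1,   1,   1]   # binary dependent variable

b0, b1, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    # gradients of the log-loss cost with respect to b0 and b1
    g0 = sum(sigmoid(b0 + b1 * x) - y for x, y in zip(xs, ys)) / len(xs)
    g1 = sum((sigmoid(b0 + b1 * x) - y) * x for x, y in zip(xs, ys)) / len(xs)
    b0 -= lr * g0            # step against the gradient
    b1 -= lr * g1

def predict(x, threshold=0.5):
    return 1 if sigmoid(b0 + b1 * x) >= threshold else 0

print(predict(1.0), predict(4.0))  # 0 1
```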

The Backpropagation with Gradient Descent

Backpropagation is the backward propagation of errors and is a powerful tool of deep learning. Using Gradient Descent, backpropagation reduces the cost function and the execution time. We now talk about calculating the Gradient Descent.
the gradient graph
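The core of Gradient Descent can be shown on a one-dimensional cost function (an illustrative sketch, with a hand-picked cost f(x) = (x - 3)^2 whose minimum is at x = 3):

```python
# Repeatedly step opposite the gradient (the derivative) until we
# approach a minimum of the cost function f(x) = (x - 3)**2.
def gradient_descent(start, lr=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of the cost at x
        x -= lr * grad       # move against the gradient
    return x

print(round(gradient_descent(0.0), 4))  # 3.0
```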

The virtual reality

images virtual box

In this post we analyze virtual reality and, specifically, the web framework A-FRAME. We use A-FRAME to insert images into web pages. To do this we can use resources already created, or create new ones with special libraries. In the window below there is an example.

Here are the available libraries.
Insert the script into the html page:


Then you must insert the other libraries to create new images or to use existing ones:

https://unpkg.com/aframe-animation-component@3.2.1/dist/aframe-animation-component.min.js https://unpkg.com/aframe-particle-system-component@1.0.x/dist/aframe-particle-system-component.min.js https://unpkg.com/aframe-gradient-sky@1.0.4/dist/gradientsky.min.js https://unpkg.com/aframe-extras.ocean@%5E3.5.x/dist/aframe-extras.ocean.js

If you want to use images already available, you must insert the following code: comandi vr
We can also include 360-degree photos; below there is an example. Furthermore, with the javascript library Three.js we can create any object. The frame below, for example, was created with Three.js and it moves with the scroll of the mouse. Click on the following button to change the images in the frame.
Now we see how to create static or animated images with Three.js. First of all, we need to define in a script the SCENE, CAMERA and RENDERER objects.
The SCENE object identifies the space where the other objects will be inserted, the CAMERA object defines how to view the images, and the RENDERER object defines how to display the images created. Naturally, for every object there are other parameters to set, as we'll see in the later examples. Other objects to define are GEOMETRY (boxes, spheres and other geometric figures), MATERIAL (a complementary object carrying, for example, the color parameter) and MESH, which takes a GEOMETRY object and applies the MATERIAL to it. Then we take the object created with MESH and add it to the SCENE. With a command such as camera.position.z = 5, pay attention to assign a different position to each object, otherwise they all get the coordinates (0,0,0). Finally we render, to display the created objects on screen; for an animation we call the renderer inside an animate loop, while a single render(scene, camera) call displays the created objects without animation. In the image below we see the html code used. javascript command
This is the result of the same code in javascript.

Create a virtual machine with Virtual Box

images virtual box

Now we talk about VirtualBox. First of all, download the installer file from www.virtualbox.org for your operating system version. Download also the VirtualBox Extension Pack.
Install the VirtualBox package previously downloaded. Now start VirtualBox and download a ready-made virtual machine file for the V.M., for example a file with the .ova extension.

virtual box
On the Microsoft site, at the link highlighted in the image below, select the system you want to virtualize and the Virtual Box platform. vm microsoft

Download the zip file, extract the .ova file and double-click it to launch the automatic installation of the V.M. In the example below we create a V.M. with Windows 7 and Internet Explorer 8.

Select import and confirm to finish the installation. vm installazione w7 microsoft When the installation has finished, the V.M. will open and you can choose the network (you can always change this parameter later). vm ova install windows seven

In the image below, a screenshot of the installed V.M.

Remember to take note of the user and password that you'll have to enter every time you open the V.M.

vm windows 7 home page

Now, from the start button of the V.M., launch a restart and wait for any updates to install.

vm w seven IE8

The Windows 7 image is valid for 90 days, but you can save the image with the snapshot function. That function is helpful to create a copy before taking updates, so that you can eventually restore the V.M.

vm microsoft

To customize your V.M., modify the setup parameters in the settings menu. To use USB ports or set network parameters, it's necessary to install the Extension Pack previously downloaded.

vm impostazioni

If you want, you can install more V.M.s, for example a Linux machine.

vm sistema linux

Application server with Node.js, PM2 and Nginx on Linux Ubuntu 16.04 Lts

by Mecbar, 13 July 2017

To install NodeJs, from the Terminal insert:
sudo apt-get install nodejs
sudo apt-get install build-essential
Once our application is ready, install the process manager pm2 with the command:
sudo npm install -g pm2
and then start the application from the Terminal:
pm2 start name_app.js #from the directory of the app
To verify that the app is running, open localhost:5500 in the browser (the port assigned in the app).

If the app is running, set the automatic start of PM2 at machine boot:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u root --hp /home/root
Now set up the HTTP server Nginx as a reverse proxy, opening the following file with an editor, here Nano:
sudo nano /etc/nginx/sites-available/default

insert the following text:
location / {
    proxy_pass http://localhost:5500;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
To serve another application, add another location block, e.g. location /other_app { ... }
When the text is written, save the file with ctrl+o and exit with ctrl+x.
Now, to verify correct operation, shut down the machine and restart it.

pm2 status

To verify the correct automatic start of the process manager PM2 that runs the app, insert from the Terminal the command pm2 show name_app or pm2 show number_app.
We see that the app runs and is active. Insert from the Terminal:

sudo pm2 show 0 pm2 process attivi

We see that the app runs and is active. From the browser, open http://localhost:5500: our app opens and works.

app pronta ad uso
Create a web server with uWSGI & Django
by Mecbar, 2 September 2017

To run a web application with Django you can use the web server built into Python, or create an application server with uWSGI. Here we see how to create an application server with uWSGI. First of all, install uWSGI from the Terminal:

pip install uwsgi

To test the correct execution of uWSGI, create a Python file, test.py:

# test.py
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b"Hello World"]  # python3
    # return ["Hello World"]  # python2

From the Terminal, launch uWSGI:

sudo uwsgi --http :8000 --wsgi-file test.py

and in the browser open http://localhost:8000

It's the same result as the command python manage.py runserver. Now install Nginx and use a Unix socket to connect to the app:

sudo apt-get install nginx
then start the service
sudo /etc/init.d/nginx start
Now go to the directory of the app and create an Nginx configuration file for the app with the Nano editor:
sudo nano app_nginx.conf
and insert

# nginx.conf
upstream django {
    # connect to this socket
    # server unix:///path_app/name_app.sock; # for a file socket
    server; # for a web port socket
}

server {
    # the port your site will be served on
    listen 8000;
    # the domain name it will serve for
    server_name; # substitute your machine's IP address or FQDN
    charset utf-8;
    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /path_app/static/images; # your Django project's media files - amend as required
    }
    # (these paths can also be set in the settings.py file)
    location /static {
        alias /path_app/static; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}

Press ctrl+o to save and ctrl+x to exit. Now link the created .conf file into the Nginx configuration:

sudo ln -s ~/path_app/name_app_nginx.conf /etc/nginx/sites-enabled/

In the settings.py file of the app, insert STATIC_ROOT = os.path.join(BASE_DIR, "static/") to handle static files.
Now restart Nginx
sudo /etc/init.d/nginx restart
[ ok ] Restarting nginx (via systemctl): nginx.service.
sudo systemctl status nginx.service

● nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since mer 2017-08-30 00:10:31 CEST; 4s ago
Process: 14530 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
Process: 14830 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 14827 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS) Main PID: 14831 (nginx)
CGroup: /system.slice/nginx.service ├─14831 nginx: master process /usr/sbin/nginx -g daemon on; master_process on ├─14832 nginx: worker process ├─14833 nginx: worker process ├─14834 nginx: worker process └─14835 nginx: worker process ago 30 00:10:31 mecbar systemd[1]: Starting A high performance web server and a reverse proxy server... ago 30 00:10:31 mecbar systemd[1]: Started A high performance web server and a reverse proxy server.

Copy uwsgi_params into the project directory. Now test the socket:
sudo uwsgi --socket :8001 --wsgi-file test.py
We see the following message as response:

*** Starting uWSGI 2.0.15 (64bit) on [Wed Aug 30 00:35:30 2017] ***
compiled with version: 5.4.0 20160609 on 29 August 2017 23:10:14
os: Linux-4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017
nodename: xxxxxx
machine: x86_64
clock source: unix detected
number of CPU cores: 4
current working directory: /percorso app
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 31000
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :8001 fd 3
Python version: 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
*** Python threads support is disabled. You can enable it with --enable-threads ***

.... If we open http://localhost:8000 in the browser we see 'Hello World'. For errors or problems, read the Nginx log at /var/log/nginx/error.log; if you get error 13: Permission denied or another message, insert the following command:
sudo uwsgi --socket name_app.sock --wsgi-file test.py --chmod-socket=666
sudo uwsgi --socket mysite.sock --wsgi-file test.py --chmod-socket=664 # (more sensible)

If everything is ok, we try the socket with our application, which for example we call app:
sudo uwsgi --socket app.sock --module app.wsgi --chmod-socket=666
N.B.: in case of error

502 bad gateway: read the log /var/log/nginx/error.log; if you get error 13 Permission denied on the project folder, delete the file app.sock and try again.
*** Starting uWSGI 2.0.15 (64bit) on [Wed Aug 30 19:41:56 2017] *** compiled with version: 5.4.0 20160609 on 29 August 2017 23:10:14
os: Linux-4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC ..... *** uWSGI is running in multiple interpreter mode ***

In the browser at http://localhost:8000 we have our app ready, running through Django, uWSGI and Nginx. To configure uWSGI to run from a .ini file, insert the settings into a file and then run it:
sudo nano app_uwsgi.ini

# mysite_uwsgi.ini file
[uwsgi]
# Django-related settings
# the base directory (full path)
chdir = /path_app
# Django's wsgi file
module = app.wsgi
# the virtualenv (full path)
home = /path/virtualenv

# process-related settings
# master
master = true
# maximum number of worker processes
processes = 10
# the socket (use the full path to be safe)
socket = /path/app.sock
# ... with appropriate permissions - may be needed
# chmod-socket = 664
# chmod-socket = 666
# clear environment on exit
vacuum = true

Press ctrl+o to save, then ctrl+x to exit, and execute the created file:
sudo uwsgi --ini app_uwsgi.ini # the --ini option means the argument is a configuration file

[uWSGI] getting INI configuration from app_uwsgi.ini *** Starting uWSGI 2.0.15 (64bit) on [Wed Aug 30 20:49:26 2017] *** compiled with version: 5.4.0 20160609 on 29 August 2017 23:10:14 os: Linux-4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017...........

To finish the configuration, set up Emperor mode to handle more applications on the same server (uid = user id, gid = group id). Create the uwsgi directory in the /etc folder:
sudo mkdir /etc/uwsgi
sudo mkdir /etc/uwsgi/vassals

then link the ini file into the vassals folder:
sudo ln -s /path/app_uwsgi.ini /etc/uwsgi/vassals/
sudo uwsgi --emperor /etc/uwsgi/vassals --uid www-data --gid www-data

*** Starting uWSGI 2.0.15 (64bit) on [Wed Aug 30 20:57:24 2017] *** compiled with version: 5.4.0 20160609 on 29 August 2017 23:10:14 os: Linux-4.10.0-33-generic #37~16.04.1-Ubuntu SMP Fri Aug 11 14:07:24 UTC 2017 nodename: xxxxxx machine: x86_64 clock source: unix detected number of CPU cores: 4 current working directory: /path_app detected binary path: /usr/local/bin/uwsgi !!! no internal routing support, rebuild with pcre support !!! setgid() to 33 setuid() to 33 *** WARNING: you are running uWSGI without its master process manager *** your processes number limit is 31000 your memory page size is 4096 bytes detected max file descriptor number: 1024 *** starting uWSGI Emperor *** *** has_emperor mode detected (fd: 6) *** [uWSGI] getting INI configuration from app_uwsgi.ini *** .... - [emperor] vassal app_uwsgi.ini is now loyal.

In the browser the app is ready.
Now set the automatic start of the uWSGI service with the Linux systemctl.
Create the file uwsgi.service:
sudo nano /etc/systemd/system/uwsgi.service
and insert the following data:

[Unit]
Description=uWSGI Emperor

[Service]
ExecStart=/usr/local/bin/uwsgi --emperor /etc/uwsgi/vassals/ --uid www-data --gid www-data --daemonize /var/log/uwsgi-emperor.log
# Requires systemd version 211 or newer
Type=notify
StandardError=syslog

ctrl+o to save and ctrl+x to exit. Start the service and check that it is active:
sudo systemctl start uwsgi.service
sudo systemctl status uwsgi.service

uwsgi service status
To test that the service starts automatically at every restart, restart the system and, as soon as it is ready, check the status of the service. If it's active, we're finished. That's all.
View Nginx PM2 and firewall of Linux
by Mecbar, 13 September 2017

In the previous posts we used Nginx and PM2. Now we see how to use them on Linux Ubuntu 16.04 LTS. Nginx is a high-performance web server/reverse proxy and can also be used as a proxy server for email on different operating systems. First of all, install it with the following command from the Terminal:

sudo apt-get install nginx
then start the service:
sudo /etc/init.d/nginx start
[ ok ] Starting nginx (via systemctl): nginx.service.
sudo systemctl status nginx.service
Other commands are stop and restart:
sudo systemctl stop nginx.service
sudo systemctl restart nginx.service
In the following scheme we see how Nginx interacts with the other applications and the web. From the browser, the user calls Nginx, which transfers the request to uWSGI/Django (the back-end). Django creates the answer and, through uWSGI, sends it via Nginx back to the browser for the user.
internet utenti
In the image below, the scheme with the web server NodeJs (Javascript) and the process manager PM2.
internet utenti
To communicate with the operating system we need to open the firewall policy, so that Nginx can transfer requests and answers between client and server.
Some commands for the Linux firewall (ufw) are the following:
  • sudo ufw status
  • sudo ufw enable # enable the firewall
  • sudo ufw disable # disable the firewall
  • sudo ufw allow 80 # enable a port, e.g. port 80
  • sudo ufw allow 'Nginx HTTP' # enable nginx for the http protocol

In the example below we see the ports opened in the firewall. firewall
In the previous post we saw how to use Nginx with uWSGI and Django; now instead we talk about PM2 for NodeJs. PM2 is a process manager: it allows us to handle several web applications on one server. With PM2 the applications are always ready on the server to serve user requests. In the previous post we saw how to install and use it with Nginx. Now we show some commands for PM2.
  • sudo systemctl status pm2 #for show status of PM2
  • sudo systemctl stop pm2 # for stop run of PM2
  • sudo systemctl start pm2
  • sudo systemctl restart pm2
Below, the commands to see the status of the web applications handled by PM2.
  • sudo pm2 show application_number_or_name # e.g. pm2 show app
  • sudo pm2 monit
  • sudo pm2 list # application list handle by PM2
  • sudo pm2 start name_app # start the app
  • sudo pm2 restart name_app # restart the app
  • sudo pm2 stop name_app # stop run app