Sunday, December 27, 2020

Testing the tests: What should we do when the test case itself is buggy?

Nowadays software testing (unit tests, A/B tests, etc.) is widely used to ensure quality and prevent bugs. But increasingly the test cases themselves cause unwanted problems: when a test case is buggy, the test fails and developers waste time chasing down problems that may not really exist.

Facebook describes an approach to detecting buggy (flaky) test cases:

1. Use machine learning to predict which test cases to run.

2. Accept that all end-to-end tests have some degree of flakiness, and build an index of how reliable each test case is.

3. Assert that a test is sufficiently reliable, and provide a scale that shows which tests are less reliable than they should be.

Software testing is a good tool for ensuring product quality. As code bases and test suites grow more complex, it is worth carefully introducing methods that test the tests themselves, to avoid wasted time and to improve testing quality.


Friday, November 13, 2020

Performance tuning: Leveraging the modern CPU branch prediction mechanism.

In modern CPUs the branch predictor is sophisticated, and the deeper pipeline makes the cost of a branch misprediction higher.

How can we take advantage of the modern branch predictor? In the example code of [1], the branch miss rate can be reduced simply by sorting the input data before executing the original algorithm.
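A minimal C sketch of that idea (a sketch only: the data values, the 128 threshold, and the loop counts are arbitrary and not taken from [1]). Compile with optimizations and compare the runtime with and without the qsort() call:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    enum { N = 1 << 20 };
    static int data[N];
    long long sum = 0;

    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    /* Sorting groups the "taken" and "not taken" outcomes together,
     * so the branch below becomes almost perfectly predictable. */
    qsort(data, N, sizeof(int), cmp_int);

    clock_t t0 = clock();
    for (int round = 0; round < 100; round++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)   /* the hot, data-dependent branch */
                sum += data[i];
    printf("sum=%lld, time=%.2fs\n", sum,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    return 0;
}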



Conclusion 

1. Add a pattern to your data (e.g. sorted input) so the branch outcomes become predictable.

2. Use the likely()/unlikely() macros to help the compiler make branch prediction more accurate (see the sketch below).
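In the Linux kernel these macros expand to GCC's __builtin_expect(); here is a small userspace sketch of the same idea (handle_error() is a made-up slow path for illustration):

#include <stdio.h>

/* Same definitions the Linux kernel uses (include/linux/compiler.h) */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static void handle_error(int err)
{
    fprintf(stderr, "error: %d\n", err);
}

static int process(int err)
{
    if (unlikely(err)) {   /* tell the compiler the error path is rare */
        handle_error(err);
        return -1;
    }
    return 0;              /* hot path: laid out as straight-line code */
}

int main(void)
{
    return process(0);
}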


Reference:

3. Linux likely()/unlikely() macro.

Monday, June 22, 2020

Machine Learning Foundations NLP(2): Using the Sequencing APIs

Using the Sequencing APIs

Sequencing converts sentences into arrays of token indices.
For example:

"I have a car"
"Car have a door"
 
Tokenize: [I:1] [have:2] [a:3] [car:4] [door:5]
 
Then these two sentences can be represented as:
[1 2 3 4] 
[4 2 3 5]

Sequencing is a useful way to represent sentence data and feed it as input to a neural network.

Code

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Sentence data
sentences = [
    'I have a car',
    'I have a pen',
    'I have a bike',
    'He have a apple and a banana'
]
# Make a tokenizer with at most 100 tokens, and label unseen tokens with the OOV index
tokenizer = Tokenizer(num_words=100, oov_token="<OOV>")
# Fit the tokenizer on the sentence data
tokenizer.fit_on_texts(sentences)
# Print the word index built from the sentence data
# The more often a word is used, the lower its index
word_index = tokenizer.word_index
print(word_index)
# Turn the sentences into sequences of token indices
sequences = tokenizer.texts_to_sequences(sentences)
# Pad/truncate every sequence to length 5; a uniform length helps when training an NLP network
padded = pad_sequences(sequences, maxlen=5)
print("\nWord Index = ", word_index)
print("\nSequences = ", sequences)
print("\nPadded Sequences:")
print(padded)
# Try with sentences the tokenizer wasn't fit on:
# unseen words ("big", "like") are replaced by <OOV>:1
test_data = [
    'I have a big car',
    'He like banana'
]
test_seq = tokenizer.texts_to_sequences(test_data)
print("\nTest Sequence = ", test_seq)

Result:

{'<OOV>': 1, 'a': 2, 'have': 3, 'i': 4, 'car': 5, 'pen': 6, 'bike': 7, 'he': 8, 'apple': 9, 'and': 10, 'banana': 11}
Word Index = {'<OOV>': 1, 'a': 2, 'have': 3, 'i': 4, 'car': 5, 'pen': 6, 'bike': 7, 'he': 8, 'apple': 9, 'and': 10, 'banana': 11}
Sequences = [[4, 3, 2, 5], [4, 3, 2, 6], [4, 3, 2, 7], [8, 3, 2, 9, 10, 2, 11]]
Padded Sequences:
[[ 0 4 3 2 5]
[ 0 4 3 2 6]
[ 0 4 3 2 7]
[ 2 9 10 2 11]]
Test Sequence = [[4, 3, 2, 1, 5], [8, 1, 11]]

Wednesday, June 17, 2020

Machine Learning Foundations NLP(1): Tokenization for Natural Language Processing

Tokenization for Natural Language Processing


Tokenization means breaking a sentence down into separate words, for example:

I have a car. -> I / have / a / car

TensorFlow provides a tokenization tool, Tokenizer, which makes it easy to tokenize input sentences.


Code:


import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer

# Sentence data
sentences = {
    'I have a car',
    'I have a pen',
    'I have a bike',
    'He have a apple'
}
# Create a tokenizer with a maximum of 10 words
tokenizer = Tokenizer(num_words=10)
# Fit the tokenizer on the sentence data
tokenizer.fit_on_texts(sentences)
# Print the word index built from the sentence data
# The more often a word is used, the lower the index it is assigned
word_index = tokenizer.word_index
print(word_index)

Result:

{'have': 1, 'a': 2, 'i': 3, 'he': 4, 'apple': 5, 'bike': 6, 'pen': 7, 'car': 8}



Reference:

Machine Learning Foundations: Ep #8 - Tokenization for Natural Language Processing



Thursday, May 28, 2020

Linux kernel module: Add an entry to debugfs with a read/write file.

Goal

This module creates a Ray_DBG directory in debugfs and a REG file for read/write access. It's similar to the previous module (which created /proc/Ray), but it is easier to use and needs fewer lines of code.
 

Code


#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/debugfs.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ray Tsai");

static u32 reg_data = 0xaa;
static struct dentry *root;

static int __init test_module_init(void)
{
	pr_info("Init Ray test module!\n");
	pr_info("Create Ray_DBG !\n");
	/* Create the Ray_DBG directory under /sys/kernel/debug */
	root = debugfs_create_dir("Ray_DBG", NULL);
	if (!root)
		return -EFAULT;

	pr_info("Create REG debug !\n");
	/* Expose reg_data as a hex read/write file named REG */
	if (!debugfs_create_x32("REG", 0777, root, &reg_data))
		goto err;
	return 0;
err:
	debugfs_remove_recursive(root);
	pr_info("Failed to initialize debugfs\n");
	return -EFAULT;
}

static void __exit test_module_exit(void)
{
	debugfs_remove_recursive(root);
	pr_info("Exit Ray test module!\n");
}

module_init(test_module_init);
module_exit(test_module_exit);

Makefile


obj-m := test_module.o
KERNELDIR ?= /lib/modules/$(shell uname -r)/build

all default: modules
install: modules_install

modules modules_install help clean:
	$(MAKE) -C $(KERNELDIR) M=$(shell pwd) $@

Usage

1. Insert module 
$>insmod test_module.ko
$>dmesg | tail
[7501255.251763] Create Ray_DBG !
[7501255.251776] Create REG debug !
2. Mount debugfs 
$>mount -t debugfs none /sys/kernel/debug
3. Read data
$>cat /sys/kernel/debug/Ray_DBG/REG
$>0x000000aa
4. Write data 
$>echo 0xff > /sys/kernel/debug/Ray_DBG/REG 
$>cat /sys/kernel/debug/Ray_DBG/REG 
$>0x000000ff

Reference

  1. Kernel document: debugfs
  2. Debugfs

Linux kernel module: Add an entry in /proc and pass args at insmod

Goal 

Implement a kernel module that accepts parameters at module insertion time, and adds an entry to /proc for reading/writing data. Embedded Linux development often uses similar facilities to help with debugging.

Code:

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

#define BUFSIZE 100

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ray Tsai");

/* Module parameters, settable at insmod time */
static int mode = 0;
module_param(mode, int, 0660);
static char *entry = "Default";
module_param(entry, charp, 0660);

/* Proc related structure */
static struct proc_dir_entry *ent;
static int private_test_data = 123;

/* Write handler */
static ssize_t test_write(struct file *file, const char __user *ubuf,
			  size_t count, loff_t *ppos)
{
	char buf[BUFSIZE];
	int temp = 0;

	pr_debug("write handler\n");
	if (count > BUFSIZE - 1)
		return -EINVAL;
	if (copy_from_user(buf, ubuf, count)) {
		pr_debug("Error copy from user\n");
		return -EFAULT;
	}
	buf[count] = '\0';
	if (sscanf(buf, "%d", &temp) == 1)
		private_test_data = temp;
	return count;
}

/* Read handler */
static ssize_t test_read(struct file *file, char __user *ubuf,
			 size_t count, loff_t *ppos)
{
	char buf[BUFSIZE];
	int len = 0;

	pr_debug("read handler\n");
	/* Only support a single read from offset 0 */
	if (*ppos > 0 || count < BUFSIZE)
		return 0;
	len += sprintf(buf, "private_test_data = %d\n", private_test_data);
	if (copy_to_user(ubuf, buf, len))
		return -EFAULT;
	*ppos = len;
	return len;
}

static struct file_operations myops = {
	.owner = THIS_MODULE,
	.read = test_read,
	.write = test_write,
};

static int __init test_module_init(void)
{
	pr_info("Init Ray test module!\n");
	pr_info("Test module: %d!\n", mode);
	pr_info("Create %s \n", entry);
	ent = proc_create(entry, 0660, NULL, &myops);
	return 0;
}

static void __exit test_module_exit(void)
{
	proc_remove(ent);
	pr_info("Exit Ray test module!\n");
}

module_init(test_module_init);
module_exit(test_module_exit);

Makefile:


obj-m := test_module.o
KERNELDIR ?= /lib/modules/$(shell uname -r)/build

all default: modules
install: modules_install

modules modules_install help clean:
	$(MAKE) -C $(KERNELDIR) M=$(shell pwd) $@

Usage:

1. Insert the module and create the /proc/Ray entry
 #>sudo insmod test_module.ko entry="Ray" mode=1238
 
2. Write data 
#>echo 123 > /proc/Ray

3. Read data
 #>cat /proc/Ray

Wednesday, May 27, 2020

DeepSpeech: A speech-to-text AI model.

  • Overview

"DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu’s Deep Speech research paper."
DeepSpeech provides APIs for many languages (Python, JavaScript, C), and it's easy to integrate into an application.

  • Install DeepSpeech

Follow the user guide instructions:

# Create and activate a virtualenv
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
source $HOME/tmp/deepspeech-venv/bin/activate
# Install DeepSpeech
pip3 install deepspeech
# Download pre-trained English model files
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.pbmm
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/deepspeech-0.7.0-models.scorer
# Download example audio files
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.7.0/audio-0.7.0.tar.gz
tar xvf audio-0.7.0.tar.gz

  • Demo

Use the command line tool to run inference on audio data.

$> deepspeech --model deepspeech-0.7.0-models.pbmm --audio audio/2830-3980-0043.wav

Output:
Loading model from file deepspeech-0.7.0-models.pbmm
TensorFlow: v1.15.0-24-gceb46aa
DeepSpeech: v0.7.1-0-g2e9c281
Loaded model in 0.0093s.
Loading scorer from files deepspeech-0.7.0-models.scorer
Loaded scorer in 0.00023s.
Running inference.
experience proves this
Inference took 1.480s for 1.975s audio file.
 
The line "experience proves this" is the text inferred from the input audio.

Use the Python API to run inference on audio data:
import wave
import numpy as np
from deepspeech import Model

# Example audio file (expected transcript: "your power is sufficient i said")
data = "audio/8455-210777-0068.wav"
# Use the wave lib to read the wav file
wf = wave.open(data, 'rb')
frames = wf.getnframes()
pcm_data = wf.readframes(frames)
wf.close()
# Convert the audio data to int16
audio = np.frombuffer(pcm_data, dtype=np.int16)
# Load the pre-trained model
ds = Model("./deepspeech-0.7.0-models.pbmm")
# Run inference
output = ds.stt(audio)
# Print the inferred text
print(output)

Output:
your power is sufficient i said



Friday, May 22, 2020

Machine Learning Foundations: Exercise 4 Happy and sad image classification model with 99.9% accuracy.

Machine Learning Foundations: Exercise 4 Happy and sad image classification model with 99.9% accuracy: code lab link

Build a model that classifies happy and sad images with a convolutional neural network, reaching more than 99.9% accuracy.
 
Note:
  •  Model over-fitting: near 100% accuracy on the training data, but lower accuracy on the testing data.
  •  ImageDataGenerator: labels data automatically using the directory name.

Code: 

import tensorflow as tf
import os
import zipfile

DESIRED_ACCURACY = 0.999

# Get the happy-or-sad image data
!wget --no-check-certificate \
    "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip" \
    -O "/tmp/happy-or-sad.zip"

zip_ref = zipfile.ZipFile("/tmp/happy-or-sad.zip", 'r')
zip_ref.extractall("/tmp/h-or-s")
zip_ref.close()

# Callback to stop training once the desired accuracy is reached
class RayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > DESIRED_ACCURACY:
            print("\nReached 99.9% accuracy so cancelling training!")
            self.model.stop_training = True

callbacks = RayCallback()

# Two Conv2D+MaxPooling pairs, then Flatten and two Dense layers
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu',
                           input_shape=(300, 300, 3)),  # matches the ImageDataGenerator output size
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(lr=0.001),
              metrics=['accuracy'])

# Label image data based on its directory name
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    '/tmp/h-or-s',          # data directory
    target_size=(300, 300), # output size
    batch_size=128,         # batch size (the whole 80-image dataset fits in one batch)
    class_mode='binary'     # two classes: happy, sad
)

# Train the model
history = model.fit(train_generator,
                    steps_per_epoch=8,  # batches drawn per epoch
                    epochs=15,
                    callbacks=[callbacks]
)

Wednesday, May 20, 2020

Conference note: Making C Less Dangerous in the Linux kernel - Kees Cook


This talk discusses the topics below on unsafe C usage, and how the Linux kernel has removed such usage or added facilities to detect it.

1. Variable Length Arrays (VLAs)
  • Use a compiler option to detect VLAs: gcc -Wvla
  • Use guard pages to prevent stack overflow. A VLA also needs many more instructions than a fixed-size array.
2. Switch case: break or non-break
  • Mark every non-break with a "fall through" annotation to show whether the programmer intended to fall through or it's a bug (see the sketch after this list).
  • Compiler support for this check: -Wimplicit-fallthrough
3. Arithmetic overflow detection
  • Use compiler options to detect overflow at compile time
  • Different warning levels are supported: ignore, or treat as a warning
4. Comparing different APIs for string copying
  • The safer string copy function: strscpy().
5. Safe stack / shadow stack
  • Separate the local-variable stack from the return-address stack
  • Supported by hardware:
    • ARM pointer authentication (sign the return address so a corrupted one can be detected)
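A userspace sketch of two of these hardening idioms (a sketch only; the function names are made up). Compiling with gcc -Wimplicit-fallthrough warns about every unannotated fall through, while the explicit attribute (or a /* fall through */ comment) silences the intentional one:

#include <stdio.h>
#include <limits.h>
#include <stdbool.h>

static int parse_flag(char c, int flags)
{
    switch (c) {
    case 'v':
        flags |= 1;
        __attribute__((fallthrough)); /* intentional: 'v' implies 'd' */
    case 'd':
        flags |= 2;
        break;
    default:
        break;
    }
    return flags;
}

/* Arithmetic overflow detection with the GCC/Clang builtins:
 * report the overflow instead of silently wrapping around. */
static bool add_sizes(unsigned int a, unsigned int b, unsigned int *out)
{
    return !__builtin_add_overflow(a, b, out);
}

int main(void)
{
    unsigned int total;

    printf("flags=%d\n", parse_flag('v', 0));
    if (!add_sizes(UINT_MAX, 1, &total))
        printf("overflow detected\n");
    return 0;
}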


Monday, May 18, 2020

Conference note: Safety vs Security: A Tale of Two Updates - Jeremy Rosen, Smile.fr

Safety and security are different aspects of a system. This talk briefly discusses the differences between them and the challenge of building a system that addresses both.

  • Safety: prove it correct and keep it simple so it stays reliable

    1. Completely define the spec.
    2. Proven: check that every state of the system meets the design.
    3. Change as little as possible: if a bug can be worked around by periodically rebooting the machine, just reboot periodically rather than update the software.

  • Security: prevent the system from being used in unwanted ways

  1. Fast iteration: e.g. replace an old encryption algorithm with a new one.
  2. Preventive: the spec needs to be updated to face new challenges.

These two aspects are often neglected in embedded systems nowadays, especially in the consumer market. But with more and more devices connected to the internet and responsible for critical tasks like healthcare, designers need to take care of both fields: include them at the beginning of system design and find an optimized combination of the two that meets the system's standards.


Sunday, May 17, 2020

Machine Learning Foundations: Exercise 3 Improve accuracy of MNIST using Convolutions.

Exercise 3: Improve accuracy of MNIST using Convolutions: code lab link

Improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling2D layer.
The number of filters affects both the accuracy and the training time.

Code: 

import tensorflow as tf

# Callback to stop training once accuracy passes 99.8%
class RayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.998:
            print("\nReached 99.8% accuracy so cancelling training!")
            self.model.stop_training = True

# Load the MNIST handwritten digit data set
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()

# Reshape and normalize the training data, and create the callback
callbacks = RayCallback()
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0

# Create a 5-layer model:
# Convolve 16 filters of size 3x3 over each image -> pool each image down to 1/4
# Flatten -> 128-unit Dense -> 10-way softmax output
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Set the optimizer and loss function
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model until accuracy > 99.8%
model.fit(training_images, training_labels, epochs=10, callbacks=[callbacks])

Saturday, May 16, 2020

GO: Fixed warning: go env -w GOPATH=... does not override conflicting OS environment variable

Because GOPATH is set as an OS-level environment variable, it can't be overridden with the go env -w command.
We need to modify the variable in the OS-specific way instead.


1. Using CMD to set GOPATH variable (Environment: WIN10)
    > setx GOPATH ##YOURPATH##

2. Check that GOPATH has been set 
   > go env 
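
On Linux or macOS the same idea applies: set the variable at the OS level, e.g. export GOPATH=##YOURPATH## in your shell profile (such as ~/.profile), then verify with go env.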



Reference 
     2. GO/env_write.txt

Wednesday, May 13, 2020

Conference note: Linux I2C in the 21st Century - Wolfram Sang, Consultant / Renesas

This talk gives a brief overview of what's new in the Linux I2C subsystem.
Some parts of the I2C subsystem in the latest kernels that I found interesting:


  • The i2c_new_dummy_device() API, for I2C devices that have more than one slave address.

It declares a dummy I2C client for the same physical device but at a different slave address.

  • The recommended new API to create such a device: i2c_new_ancillary_device()

  • Dynamic address assignment: on the same I2C bus, dynamically detect which address a device should use.


I2C has been widely used in industry for decades. Enhanced features, such as multiple slave addresses per device or dynamically assigned slave addresses on the same bus, bring challenges for developers, and the Linux kernel provides new APIs for more generic driver development.
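
As a sketch, a driver probe routine might register the second address of the same chip like this (the "mychip" driver, the "ctrl" name, and the 0x60 fallback address are hypothetical; error handling and driver registration are trimmed):

#include <linux/i2c.h>
#include <linux/module.h>

static struct i2c_client *ctrl_client;

static int mychip_probe(struct i2c_client *client,
                        const struct i2c_device_id *id)
{
        /* "ctrl" is matched against reg-names in the device tree;
         * 0x60 is the fallback slave address if the DT gives none. */
        ctrl_client = i2c_new_ancillary_device(client, "ctrl", 0x60);
        if (IS_ERR(ctrl_client))
                return PTR_ERR(ctrl_client);
        return 0;
}

static int mychip_remove(struct i2c_client *client)
{
        i2c_unregister_device(ctrl_client);
        return 0;
}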


Further thoughts:
I3C is the next-generation serial bus intended to replace I2C, but I haven't seen many applications of it yet.


Reference : 
  1.  Linux I2C in the 21st Century - Wolfram Sang, Consultant / Renesas
  2.  I2C and SMBus Subsystem
  3.  I3C 



Tuesday, May 12, 2020

Machine Learning Foundations: Exercise 2 Handwriting digit model with 99% accuracy

Exercise 2: Handwriting digit model with 99% accuracy: code lab link
Write an MNIST classifier that trains to 99% accuracy or above, and does so without a fixed number of epochs -- i.e. training should stop once it reaches that level of accuracy.

Code:
import tensorflow as tf

# Callback to stop training once accuracy passes 99%
class RayCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.99:
            print("\nReached 99% accuracy so cancelling training!")
            self.model.stop_training = True

# Load the MNIST handwritten digit data set
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the training data and create the callback
callbacks = RayCallback()
x_train = x_train / 255.0
x_test = x_test / 255.0

# Create a 3-layer model: Flatten -> 128-unit Dense -> 10-way softmax output
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Set the optimizer and loss function
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model until accuracy > 99%
model.fit(x_train, y_train, epochs=15, callbacks=[callbacks])

# Evaluate with the test data
model.evaluate(x_test, y_test)

Result:
Epoch 1/15
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2570 - accuracy: 0.9265
Epoch 2/15
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1133 - accuracy: 0.9667
Epoch 3/15
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0778 - accuracy: 0.9765
Epoch 4/15
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0580 - accuracy: 0.9822
Epoch 5/15
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0444 - accuracy: 0.9859
Epoch 6/15
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0345 - accuracy: 0.9893
Epoch 7/15
1863/1875 [============================>.] - ETA: 0s - loss: 0.0268 - accuracy: 0.9916
Reached 99% accuracy so cancelling training!
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0271 - accuracy: 0.9916
313/313 [==============================] - 0s 1ms/step - loss: 0.0843 - accuracy: 0.9774
[0.08431357890367508, 0.977400004863739]







Machine Learning Foundations: Exercise 1 House Prices Question


Exercise 1: House Prices Question: code lab link
Build a neural network that predicts the price of a house according to a simple formula: a house costs 50k plus 50k per bedroom, so a 1-bedroom house costs 100k, a 2-bedroom house costs 150k, and so on.

Training data set
Bedroom count: [1, 2, 3, 4]
House price: [100, 150, 200, 250]

Code:
import tensorflow as tf
import numpy as np
from tensorflow import keras

# Create a single-layer, single-neuron model
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
# Set the optimizer and loss function
model.compile(optimizer='sgd', loss='mean_squared_error')
# Training data (price in units of 1k)
xs = np.array([1, 2, 3, 4], dtype=int)
ys = np.array([100, 150, 200, 250], dtype=int)
# Train the model for 500 epochs
model.fit(xs, ys, epochs=500)
# Predict the price of a 7-bedroom house
print(model.predict([7.0]))
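
Since the target formula is price = 50 + 50 × bedrooms (in units of 1k), the prediction for 7 bedrooms should converge toward 400 as training progresses.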

Wednesday, May 6, 2020

Octave: Read data from csv file.

Octave/MATLAB can read a CSV file and quickly process or plot the data.
Below is simple code that reads a CSV file and does some parsing.


1. Read the Sample_Data.csv file and split it into a cell array of strings:
    char = strsplit(fileread ("Sample_Data.csv"));

2. Simple parse: assuming each record spans ten whitespace-separated fields, keep the last six fields of every record:

    for i = 1:floor(size(char,2)/10)
        result(i,:) = char((10*i)-5:(10*i));
    end






Wednesday, March 11, 2020

How to get the main() function's return value in a Linux terminal.

Sometimes we want to get the exit value of the main() function; running "echo $?" prints the exit status of the most recently executed program.

Example program:
#include <stdio.h>

int main(void)
{
    printf("Hello world\r\n");
    return 0xff;
}



Result:
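
Assuming the example is saved as main.c and compiled to a.out, a run looks like this (the shell reports the low 8 bits of main()'s return value, so 0xff shows up as 255):

$> gcc main.c -o a.out
$> ./a.out
Hello world
$> echo $?
255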



Reference:
  1. UNIX Shell Programming 

Linux driver: How to enable dynamic debug at boot time for a built-in driver.

Dynamic debug is useful for debugging drivers, and can be enabled by: 1. Mount debugfs #>mount -t debugfs none /sys/kernel/debug 2. Enable dy...