Follow Golangbot | Golang tutorial on Feedspot


Welcome to tutorial no. 4 in our Resumable file uploader series.

In the previous tutorials, we coded our resumable tus server using Go. In this tutorial, we will use the curl and dd commands to test the tus server.

Testing

We have the resumable file upload tus server ready but we have not tested it yet. We need a tus client to test the tus server. We will create the Go client in the upcoming tutorials. For now we will use the curl command to test the tus server.

Let's run the server first. Run the following commands in the terminal to fetch the code from GitHub and then run it.

go get github.com/golangbot/tusserver  
go install github.com/golangbot/tusserver  
tusserver  

After running the above commands, the server will be up and running at localhost:8080.

2019/03/30 18:01:41 Connection established successfully  
2019/03/30 18:01:41 TUS Server started  
2019/03/30 18:01:41 Directory created successfully  
2019/03/30 18:01:41 table created successfully  

We need a file to test the tus server. I have made a collage video of my pet and it is available at https://www.dropbox.com/s/evchz5hsuvtrvuu/mypet.mov?dl=0. Please feel free to use it :). I have downloaded the video to my ~/Downloads directory.

Let's send a POST request and create a new file. We need to specify the Upload-Length of the entire file in the POST request. This is nothing but the size of the file. We can use the ls command to find the size of the file.

ls -al ~/Downloads/mypet.mov  

The above command returns the following output.

-rw-rw-r-- 1 naveen naveen 11743398 Mar 31 11:11 /home/naveen/Downloads/mypet.mov

11743398 is the size of the file. Now that we know the Upload-Length, let's create the file by sending a post request.

curl --request POST  localhost:8080/files --header "Upload-Length: 11743398" -i  

The above command creates the file. The -i argument at the end displays the response headers. The command returns the following result.

HTTP/1.1 201 Created  
Location: localhost:8080/files/1  
Date: Sun, 31 Mar 2019 07:47:33 GMT  
Content-Length: 0  

The file has been created successfully.

Now comes the tricky part. How do we test the tus server by simulating a network disconnection? If we send a PATCH request to the file URL using curl, the request will complete immediately since the server is running locally, and we will not be able to test whether the server can handle resumable uploads.

This is where the --limit-rate argument of curl helps us. This argument can be used to rate limit the patch file request.

curl --request PATCH --data-binary "@/home/naveen/Downloads/mypet.mov" localhost:8080/files/1 --header "Upload-Offset: 0" --header "Expect:" -i --limit-rate 200K  

In the curl request above, we send a PATCH request to the file at location localhost:8080/files/1 with Upload-Offset: 0, and we rate limit the request to 200KB/sec. The contents of mypet.mov are added to the request body. The --header "Expect:" argument is needed to prevent curl from sending the Expect: 100-continue header. Please read https://gms.tf/when-curl-sends-100-continue.html to know why this is needed.

After issuing the above PATCH request, the file will be transferred at 200KB/s. Let the request run for a few seconds, say 10 seconds, and then stop it by pressing ctrl + c. Now we have terminated the PATCH request in the middle. The server should have stored the bytes transferred so far. Let's check whether it has.

Move to the server logs and you will be able to see the following in the log,

2019/03/31 13:36:00 Received file partially unexpected EOF  
2019/03/31 13:36:00 Size of received file  1589248  
2019/03/31 13:36:00 number of bytes written  1589248  

The size of the received file may be different for you. The above is my output.

Looks like it has saved the bytes received so far. But how do we verify it? Let's check the size of the uploaded file.

ls -al ~/fileserver/1  

Running the above command outputs

-rw-r--r-- 1 naveen naveen 1589248 Mar 31 13:36 /home/naveen/fileserver/1

The size of the file matches the server output. Now we can be 100% sure that the server has saved the bytes it has received. If you try to play the video now, it won't play since the file is still not completely uploaded yet.

The next step is to continue the PATCH request from where it stopped. We first need to know the Upload-Offset so that we can issue the next PATCH request. This is where the HEAD request comes in handy.

curl --head localhost:8080/files/1 -i  

The above curl command will return the Upload-Offset.

HTTP/1.1 200 OK  
Upload-Offset: 1589248  
Date: Sun, 31 Mar 2019 08:17:28 GMT  

Note that the offset matches the server logs and the file size.

Now we need to send a PATCH request with the above upload offset. One more concern: we need to send the file data (the bytes of the file) from this offset onwards only, not the entire file.

This is where the dd command comes in handy.


Welcome to tutorial no. 3 in our Resumable file uploader series.

The previous tutorials provided an introduction about the tus protocol and we also created the DB CRUD methods.

In this tutorial, we will create the http handlers to support the POST, PATCH and HEAD http methods.

This tutorial has the following sections

  • POST http handler
  • HEAD http handler
  • PATCH http handler
    • File validation
    • Upload complete validation
    • Upload offset validation
    • Content length validation
    • File patch
POST http handler

Before we create the POST handler, we need a directory to store the files. For simplicity, we are going to create a directory named fileserver inside the home directory to store the files.

const dirName = "fileserver"

func createFileDir() (string, error) {  
    u, err := user.Current()
    if err != nil {
        log.Println("Error while fetching user home directory", err)
        return "", err
    }
    home := u.HomeDir
    dirPath := path.Join(home, dirName)
    err = os.MkdirAll(dirPath, 0744)
    if err != nil {
        log.Println("Error while creating file server directory", err)
        return "", err
    }
    return dirPath, nil
}

In the above function, we get the current user's home directory and append the dirName constant to build the directory path. This function returns the path of the newly created directory or an error if any.

This function will be called from main and the dirPath returned from this function will be used by the POST file handler to create the file.

Now that we have the directory ready, let's move to the POST http handler. We will name this handler createFileHandler. The POST http handler is used to create a new file and return the location of the newly created file in the Location header. It is mandatory for the request to contain an Upload-Length header indicating the entire file size.

func (fh fileHandler) createFileHandler(w http.ResponseWriter, r *http.Request) {  
    ul, err := strconv.Atoi(r.Header.Get("Upload-Length"))
    if err != nil {
        e := "Improper upload length"
        log.Printf("%s %s", e, err)
        w.WriteHeader(http.StatusBadRequest)
        w.Write([]byte(e))
        return
    }
    log.Printf("upload length %d\n", ul)
    io := 0
    uc := false
    f := file{
        offset:         &io,
        uploadLength:   ul,
        uploadComplete: &uc,
    }
    fileID, err := fh.createFile(f)
    if err != nil {
        e := "Error creating file in DB"
        log.Printf("%s %s\n", e, err)
        w.WriteHeader(http.StatusInternalServerError)
        return
    }

    filePath := path.Join(fh.dirPath, fileID)
    file, err := os.Create(filePath)
    if err != nil {
        e := "Error creating file in filesystem"
        log.Printf("%s %s\n", e, err)
        w.WriteHeader(http.StatusInternalServerError)
        return
    }
    defer file.Close()
    w.Header().Set("Location", fmt.Sprintf("localhost:8080/files/%s", fileID))
    w.WriteHeader(http.StatusCreated)
    return
}

In line no. 2 we check whether the Upload-Length header is valid. If not we return a Bad Request response.

If the Upload-Length is valid, we create a file in the DB with the provided upload length and with initial offset 0 and upload complete false. Then we create the file in the filesystem and return the location of the file in the Location http header and a 201 created response code.

The dirPath field containing the path to store the file should be added to the fileHandler struct. This field will be updated with the dirPath returned from createFileDir() function later from main(). The updated fileHandler struct is provided below.

type fileHandler struct {  
    db      *sql.DB
    dirPath string
}



HEAD http handler

When a HEAD request is received, we are supposed to return the offset of the file if it exists. If the file does not exist, we should return a 404 Not Found response. We will name this handler fileDetailsHandler.

func (fh fileHandler) fileDetailsHandler(w http.ResponseWriter, r *http.Request) {  
    vars := mux.Vars(r)
    fID := vars["fileID"]
    file, err := fh.File(fID)
    if err != nil {
        w.WriteHeader(http.StatusNotFound)
        return
    }
    log.Println("going to write upload offset to output")
    w.Header().Set("Upload-Offset", strconv.Itoa(*file.offset))
    w.WriteHeader(http.StatusOK)
    return
}

We will use the mux router to route the http requests. Please run the command go get github.com/gorilla/mux to fetch the mux router from GitHub.

In line no. 3, we get the fileID from the request URL using the mux router.

For the purpose of understanding, I have provided the code which will call the above fileDetailsHandler. We will be writing the below line in the main function later.

r.HandleFunc("/files/{fileID:[0-9]+}", fh.fileDetailsHandler).Methods("HEAD")  

This handler will be called when the URL has a valid integer fileID. [0-9]+ is a regular expression which matches one or more digits. If the fileID is valid, it will be stored with the key fileID in a map of type map[string]string. This map can be retrieved by calling the Vars function of the mux router. This is how we get the fileID in line no. 3.

After getting the fileID, we check whether the file exists by calling the File method in line no. 4. Remember we wrote this File method in the last tutorial. If the file is valid, we return the response with the Upload-Offset header. If not, we return a http.StatusNotFound response.

PATCH http handler

The only remaining handler is the PATCH http handler. There are a few validations to be done in the PATCH request before we move to the actual file patching. Let's do them first.

File validation

The first step is to make sure the file trying to be uploaded actually exists.

func (fh fileHandler) filePatchHandler(w http.ResponseWriter, r *http.Request) {  
    log.Println("going to patch file")
    vars := mux.Vars(r)
    fID := vars["fileID"]
    file, err := fh.File(fID)
    if err != nil {
        w.WriteHeader(http.StatusNotFound)
        return
    }
}

The above code is similar to the one we wrote in the head http handler. It validates whether the file exists.

Upload complete validation

The next step is to check whether the file has already been uploaded completely.

if *file.uploadComplete == true {  
        e := "Upload already completed"
        w.WriteHeader(http.StatusUnprocessableEntity)
        w.Write([]byte(e))
        return
    }

If the upload is already complete, we return a StatusUnprocessableEntity status.

Upload offset validation

Each PATCH request should contain an Upload-Offset header field indicating the current offset of the data, and the actual data to be patched to the file should be present in the message body.

off, err := strconv.Atoi(r.Header.Get("Upload-Offset"))  
if err != nil {  
    log.Println("Improper upload offset", err)
    w.WriteHeader(http.StatusBadRequest)
    return
}
log.Printf("Upload offset %d\n", off)  
if *file.offset != off {  
    e := fmt.Sprintf("Expected Offset %d got offset %d", *file.offset, off) 
    w.WriteHeader(http.StatusConflict)
    w.Write([]byte(e))
    return
}

In the above code, we first check whether the Upload-Offset in the request header is valid. If it is not, we return a StatusBadRequest.

In line no. 8, we compare the offset stored in the table, *file.offset, with the one present in the header, off. They are expected to be equal. Let's take the example of a file with upload length 250 bytes. If 100 bytes are already uploaded, the upload offset in the database will be 100. The server will now expect a request with Upload-Offset header 100. If they are not equal, we return a StatusConflict header.

Content length validation

The next step is validating the content-length.

clh := r.Header.Get("Content-Length")  
cl, err := strconv.Atoi(clh)  
if err != nil {  
    log.Println("unknown content length")
    w.WriteHeader(http.StatusInternalServerError)
    return
}

if cl != (file.uploadLength - *file.offset) {  
    e := fmt.Sprintf("Content length doesn't match. Expected content length %d got %d", file.uploadLength-*file.offset, cl)
    log.Println(e)
    w.WriteHeader(http.StatusBadRequest)
    w.Write([]byte(e))
    return
}

Let's say a file is 250 bytes long and the current offset is 150. This indicates that there are 100 more bytes to be uploaded. Hence the Content-Length of the PATCH request should be exactly 100. This validation is done in line no. 9 of the above code.



File patch

Now comes the fun part. We have done all our validations and are ready to patch the file.

body, err := ioutil.ReadAll(r.Body)  
if err != nil {  
    log.Printf("Received file partially %s\n", err)
    log.Println("Size of received file ", len(body))
}
fp := fmt.Sprintf("%s/%s", fh.dirPath, fID)  
f, err := os.OpenFile(fp, os.O_WRONLY, 0644) // O_APPEND is omitted since WriteAt returns an error on files opened with O_APPEND  
if err != nil {  
    log.Printf("unable to open file %s\n", err)
    w.WriteHeader(http.StatusInternalServerError)
    return
}
defer f.Close()

n, err := f.WriteAt(body, int64(off))  
if err != nil {  
    log.Printf("unable to write %s", err)
    w.WriteHeader(http.StatusInternalServerError)
    return
}
log.Println("number of bytes written ", n)  
no := *file.offset + n  
file.offset = &no

uo := strconv.Itoa(*file.offset)  
w.Header().Set("Upload-Offset", uo)  
if *file.offset == file.uploadLength {  
    log.Println("upload completed successfully")
    *file.uploadComplete = true
}

err = fh.updateFile(file)  
if err != nil {  
    log.Println("Error while updating file", err)
    w.WriteHeader(http.StatusInternalServerError)
    return
}
log.Println("going to send successfully uploaded response")  
w.WriteHeader(http.StatusNoContent)  

We start reading the message body in line no. 1 of the above code. The ReadAll function reads from the source until EOF or an error and returns the data it has read. EOF is not considered an error, since ReadAll is expected to read from the source until EOF.

Let's say the PATCH request disconnects before it is complete. When this happens, ReadAll will return an unexpected EOF error. Generic web servers usually discard a request if it is incomplete. But we are creating a resumable file uploader, so we shouldn't do that. We should patch the file with the data we have received so far.

The length of data received is printed in line no. 4.

In line no. 7 we open the file, which was already created by the POST handler, for writing.

In line no. 15 we write the request body to the file at the offset provided in the request header. In line no. 23 we update the offset of the file by adding the number of bytes written. In line no. 26 we write the updated offset to the response header.

In line no. 27 we check whether the current offset is equal to the upload length. If it is, the upload has completed and we set the uploadComplete flag to true.

Finally in line no. 32 we write the updated file details to the database and return a StatusNoContent header indicating that the request is successful.

The entire code along with the main function is available on GitHub at https://github.com/golangbot/tusserver. We need the Postgres driver to run the code. Please fetch it by running the command go get github.com/lib/pq in the terminal before running the program.

That's about it. We have a working resumable file uploader. In the next tutorial, we will test this uploader using the curl and dd commands and also discuss possible enhancements.

Next tutorial - Testing the server using curl and dd commands

Have a good day.

Like my tutorials? Please show your support by donating. Your donations will help me create more awesome tutorials.


Welcome to tutorial no. 2 in our Resumable file uploader series.

The last tutorial explained how tus protocol works. I strongly recommend reading the previous tutorial if you are new to tus. In this tutorial we will create the data model and the database CRUD methods.

This tutorial has the following sections

  • Data model
  • Table creation
  • Tus Recollection
  • Creating file
  • Updating file
  • Get file
Data model

Let's first discuss the data model for our tus server. We will be using PostgreSQL as the database.

Our tus server needs a table file to store information related to a file. Let's discuss what fields should be in that table.

We need a field to uniquely identify files. To keep things simple, we will use an auto-incremented integer field file_id as the file identifier. This field will be the primary key of the table. We will also use this id as the file name.

Next, our server needs to keep track of the offset for each file. We will use an integer field file_offset to store the file offset. We will use another integer field file_upload_length to store the upload length of the file.

A boolean field file_upload_complete is used to determine whether the entire file has been uploaded or not.

We will also have the usual audit fields created_at and modified_at.

Here is the table schema

file_id SERIAL PRIMARY KEY  
file_offset INT NOT NULL  
file_upload_length INT NOT NULL  
file_upload_complete BOOLEAN NOT NULL  
created_at TIMESTAMP  default NOW() not null  
modified_at TIMESTAMP default NOW() not null  
Table creation

We will first create a database named fileserver and then write code to create the file table.

Please switch to the psql prompt in terminal using the following command

psql -U postgres  

You will be prompted to enter the password. After successful login, you can view the postgres command prompt.

postgres=# create database fileserver;  

The above command will create the database fileserver

Now that we have the DB ready, let's go ahead and create the table in code.

type fileHandler struct {  
    db *sql.DB
}

func (fh fileHandler) createTable() error {  
    q := `CREATE TABLE IF NOT EXISTS file(file_id SERIAL PRIMARY KEY, 
           file_offset INT NOT NULL, file_upload_length INT NOT NULL, file_upload_complete BOOLEAN NOT NULL, 
          created_at TIMESTAMP default NOW() NOT NULL, modified_at TIMESTAMP default NOW() NOT NULL)`
    _, err := fh.db.Exec(q)
    if err != nil {
        return err
    }
    log.Println("table created successfully")
    return nil
}

We have a fileHandler struct which contains a single field db which is the handle to the database. This will be injected from main later. In line no. 5 we have added the createTable() method. This method creates the table if it does not exist and returns errors if any.



Tus Recollection

Before we create the DB CRUD methods, let's recollect the http methods used by the tus protocol

POST - To create a new file

PATCH - To upload data to an existing file at offset Upload-Offset

HEAD - To get the current Upload-Offset of the file to start the next patch request from.

We will need the Create, Update and Read table operations to support the above http methods. We will create them in this tutorial.

Creating file

Before we add the method to create the file, let's go ahead and define the file data structure first.

type file struct {  
    fileID         int
    offset         *int
    uploadLength   int
    uploadComplete *bool
}

The file struct above represents a file. Its fields are self-explanatory. There is a reason why we have chosen pointer types for offset and uploadComplete; it will be explained later.

We will next add the method to insert a new row into the file table.

func (fh fileHandler) createFile(f file) (string, error) {  
    cfstmt := `INSERT INTO file(file_offset, file_upload_length, file_upload_complete) VALUES($1, $2, $3) RETURNING file_id`
    fileID := 0
    err := fh.db.QueryRow(cfstmt, f.offset, f.uploadLength, f.uploadComplete).Scan(&fileID)
    if err != nil {
        return "", err
    }
    fid := strconv.Itoa(fileID)
    return fid, nil   
}

The above method inserts a row into the file table and converts the fileID to string and returns it. It's pretty straightforward. The reason we are converting the fileID to string is because the fileID is also used as the name of the file later.

Updating file

Let's write the file update method now. Typically, we only ever have to update the offset and uploadComplete fields of a file; the fileID and the uploadLength will not change once a file is created. This is also the reason we chose pointers for offset and uploadComplete in the file struct. If offset or uploadComplete is nil, it means that the field is not set and need not be updated. Had we chosen value types instead, an unset field would simply hold its zero value (0 or false) and we would have no way to find out whether it was actually set or not.

The file update method is provided below.

func (fh fileHandler) updateFile(f file) error {  
    var query []string
    var param []interface{}
    if f.offset != nil {
        query = append(query, fmt.Sprintf("file_offset = $%d", len(param)+1))
        param = append(param, *f.offset)
    }
    if f.uploadComplete != nil {
        query = append(query, fmt.Sprintf("file_upload_complete = $%d", len(param)+1))
        param = append(param, *f.uploadComplete)
    }

    if len(query) > 0 {
        // NOW() is evaluated by the database itself, so it is embedded in
        // the query directly instead of being passed as a parameter.
        query = append(query, "modified_at = NOW()")

        qj := strings.Join(query, ", ")
        sqlq := fmt.Sprintf("UPDATE file SET %s WHERE file_id = $%d", qj, len(param)+1)
        param = append(param, f.fileID)

        log.Println("generated update query", sqlq)
        _, err := fh.db.Exec(sqlq, param...)
        if err != nil {
            log.Println("Error during file update", err)
            return err
        }
    }
    return nil
}

Let me brief how this method works. We have two slices, query and param. For every field that is set, we append an update clause to the query slice and the corresponding argument to the param slice. Finally, we build the update statement from the contents of these two slices. The placeholder number of each clause is derived from the current length of the param slice, so the placeholders always line up with the arguments regardless of which fields are set.

We first check whether offset is nil. If it is not, we add the corresponding update clause to the query slice and the argument to the param slice. We apply similar logic for uploadComplete.

We then check whether the length of query is greater than zero. If it is, we have at least one field to update, so we also add a clause to update the modified_at DB field. Since NOW() is evaluated by the database, it goes directly into the query text rather than into the param slice. Joining the contents of the query slice then yields the full update statement.

Let's try to better understand this code using a file struct with fileID 32, offset 100 and uploadComplete false.

The contents of the query and param slices before the final query is built will be

query = []string{"file_offset = $1", "file_upload_complete = $2"}  
param = []interface{}{100, false}  

The generated update query will be of the form

UPDATE file SET file_offset = $1, file_upload_complete = $2, modified_at = NOW() WHERE file_id = $3  

and the final param slice will be {100, false, 32}

We execute the query and return errors if any.

Get file

The final DB method needed by the tus protocol is a method to return the details of a file when provided with a fileID.

func (fh fileHandler) File(fileID string) (file, error) {  
    fID, err := strconv.Atoi(fileID)
    if err != nil {
        log.Println("Unable to convert fileID to int", err)
        return file{}, err
    }
    log.Println("going to query for fileID", fID)
    gfstmt := `select file_id, file_offset, file_upload_length, file_upload_complete from file where file_id = $1`
    row := fh.db.QueryRow(gfstmt, fID)
    f := file{}
    err = row.Scan(&f.fileID, &f.offset, &f.uploadLength, &f.uploadComplete)
    if err != nil {
        log.Println("error while fetching file", err)
        return file{}, err
    }
    return f, nil
}

In the above method, we query the file table for the provided fileID and return the details of the file. It's straightforward.

Now that we are done with the DB methods, the next step would be to create the http handlers. We will do this in the next tutorial.

Next tutorial - Creating http handlers

Like my tutorials? Please show your support by donating. Your donations will help me create more awesome tutorials.


Welcome to tutorial no. 1 in our Resumable file uploader series.

How many times have you tried to upload a large file only to find that it failed because of a network issue! When you re-upload the file, the upload starts from the beginning :(. Not cool at all. This is where resumable file uploaders come in handy.

Resumable file uploaders allow the file upload to start right from the point where it stopped instead of uploading the whole file again.

In this tutorial series, we will learn how to create a resumable file upload server and client in Go using the tus protocol. This series is not an exact implementation of the tus protocol, but rather a simplified version. It is self-sufficient for creating a resumable file uploader using Go. We will keep improving this uploader in the upcoming tutorials and make it fully tus compatible.

This tutorial has the following sections

  • Tus protocol
  • POST request to create the file
  • PATCH request to update the file
  • HEAD request to get the current file offset
Tus protocol

The tus protocol is quite simple and the best selling point of tus is that it works on top of HTTP. Let's first understand how tus protocol works.

Tus protocol needs three http methods namely POST, PATCH and HEAD. It's best to understand the tus protocol using an example.

Let's take the example of uploading a file of size 250 bytes. The upcoming sections explain the sequence of http calls required to upload a file using tus protocol.

POST request to create the file

This is the first step. The client sends a POST request with the file's upload length(size) to the server. The server creates a new file and responds with the file's location.

Request

POST /files HTTP/1.1  
Host: localhost:8080  
Content-Length: 0  
Upload-Length: 250  

In the above request, we send a POST request to the URL localhost:8080/files to create a file with Upload-Length 250 bytes. The Upload-Length represents the size of the entire file. Since the request does not have a message body, the Content-Length field is zero.

The server creates the file and returns the following response.

Response

HTTP/1.1 201 Created  
Location: localhost:8080/files/12  

The Location header provides the location of the created file. In our case it is localhost:8080/files/12



PATCH request to update the file

The PATCH request is used to write bytes to the file at offset Upload-Offset. Each PATCH request should contain an Upload-Offset field indicating the current offset of the file data being uploaded.

In our case, since we just created a new file and are starting to upload data to it, the client sends a PATCH request with Upload-Offset as 0. Please note that file offsets are zero based. The first byte of the file is at offset 0.

Request

PATCH /files/12 HTTP/1.1  
Host: localhost:8080  
Content-Length: 250  
Upload-Offset: 0

[250 bytes of the file]

In the above request, the Content-Length field is 250 since we are uploading a file of size 250 bytes. The Upload-Offset is 0, indicating that the server should write the contents of the request starting at offset 0, the first byte of the file.

The server will respond with a 204 No Content status indicating the request is successful. The response to the PATCH request should contain the Upload-Offset field indicating the next byte to be uploaded. In this case, the Upload-Offset will be 250, indicating that the server has received the entire file and the upload is complete.

Response

HTTP/1.1 204 No Content  
Upload-Offset: 250  

The above response from the server indicates that the upload has completed successfully since the Upload-Offset is equal to the Upload-Length 250.

HEAD request to get the current file offset

The patch request above was completed successfully without any network problems and the file was uploaded completely.

What if there was a network issue while the file was being uploaded and the upload failed in the middle? The client should not upload the entire file again but rather start uploading from the failed byte. This is where the HEAD request helps.

Let's say the file upload request disconnected after uploading 100 bytes. The client needs to send a HEAD request to the server to get the current Upload-Offset of the file to know how many bytes have been uploaded and how much is still left to be uploaded.

Request

HEAD /files/12 HTTP/1.1  
Host: localhost:8080  

Response

HTTP/1.1 200 OK  
Upload-Offset: 100  

The server responds with the upload offset 100, indicating that the client has to start uploading again from offset 100. Note that the response to a HEAD request does not contain a message body. It contains only headers.

The client then sends a PATCH request with this upload offset and a request body containing the remaining 150 bytes.

250(file size) - 100(upload offset) = 150 remaining bytes

Request

PATCH /files/12 HTTP/1.1  
Host: localhost:8080  
Content-Length: 150  
Upload-Offset: 100

[Remaining 150 bytes]

Response

HTTP/1.1 204 No Content  
Upload-Offset: 250  

The server responds with a 204 status and Upload-Offset: 250, equal to the Upload-Length, indicating that the file has been uploaded completely.

If the request again fails in the middle of the upload, the client should send a HEAD request followed by a PATCH.

The gist is to keep calling HEAD to learn the current Upload-Offset, followed by PATCH, until the server responds with an Upload-Offset equal to the Upload-Length.

This brings us to the end of this tutorial. In the next tutorial, we will create the data model for the tus server. Have a good day.

Next tutorial - Implementing DB CRUD methods

Like my tutorials? Please show your support by donating. Your donations will help me create more awesome tutorials.


Welcome to tutorial no. 36 in Golang tutorial series.

In this tutorial we will learn how to write data to files using Go. We will also learn how to write to a file concurrently.

This tutorial has the following sections

  • Writing string to a file
  • Writing bytes to a file
  • Writing data line by line to a file
  • Appending to a file
  • Writing to file concurrently

Please run all the programs of this tutorial in your local system as playground doesn't support file operations properly.

Writing string to a file

One of the most common file writing operations is writing a string to a file. This is quite simple to do and consists of the following steps.

  1. Create the file
  2. Write the string to the file

Let's get to the code right away.

package main

import (  
    "fmt"
    "os"
)

func main() {  
    f, err := os.Create("test.txt")
    if err != nil {
        fmt.Println(err)
        return
    }
    l, err := f.WriteString("Hello World")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(l, "bytes written successfully")
    err = f.Close()
    if err != nil {
        fmt.Println(err)
        return
    }
}

The Create function in line no. 9 of the program above creates a file named test.txt. If a file with that name already exists, Create truncates it. This function returns a file handle of type *os.File.

In line no. 14, we write the string Hello World to the file using the WriteString method. This method returns the number of bytes written and an error, if any.

Finally we close the file in line no. 20.

The above program will print

11 bytes written successfully  

You can find a file named test.txt created in the directory from which this program was executed. If you open the file using any text editor, you can find that it contains the text Hello World.

Writing bytes to a file

Writing bytes to a file is quite similar to writing string to a file. We will use the Write method to write bytes to a file. The following program writes a slice of bytes to a file.

package main

import (  
    "fmt"
    "os"
)

func main() {  
    f, err := os.Create("/home/naveen/bytes")
    if err != nil {
        fmt.Println(err)
        return
    }
    d2 := []byte{104, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100}
    n2, err := f.Write(d2)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(n2, "bytes written successfully")
    err = f.Close()
    if err != nil {
        fmt.Println(err)
        return
    }
}

In the program above, in line no. 15, we use the Write method to write a slice of bytes to a file named bytes in the directory /home/naveen. You can change this directory to a different one. The rest of the program is self explanatory. This program will print 11 bytes written successfully and create a file named bytes. Open the file and you can see that it contains the text hello world.

Writing strings line by line to a file

Another common file operation is writing strings to a file line by line. In this section we will write a program to create a file with the following content.

Welcome to the world of Go.  
Go is a compiled language.  
It is easy to learn Go.  

Let's get to the code right away.

package main

import (  
    "fmt"
    "os"
)

func main() {  
    f, err := os.Create("lines")
    if err != nil {
        fmt.Println(err)
        return
    }
    d := []string{"Welcome to the world of Go.", "Go is a compiled language.", "It is easy to learn Go."}

    for _, v := range d {
        fmt.Fprintln(f, v)
    }
    err = f.Close()
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("file written successfully")
}

In line no. 9 of the program above, we create a new file named lines. In line no. 16, we iterate over the slice using a for range loop and use the Fprintln function to write each line to the file. Fprintln takes an io.Writer as a parameter and appends a newline, which is just what we want. Running this program will print file written successfully and a file lines will be created in the current directory. The contents of the lines file are provided below.

Welcome to the world of Go.  
Go is a compiled language.  
It is easy to learn Go.  



Appending to a file

In this section we will append one more line to the lines file which we created in the previous section. We will append the line File handling is easy to the lines file.

The file has to be opened in append and write only mode. These flags are passed as parameters to the OpenFile function. After the file is opened in append mode, we add the new line to the file.

package main

import (  
    "fmt"
    "os"
)

func main() {  
    f, err := os.OpenFile("lines", os.O_APPEND|os.O_WRONLY, 0644)
    if err != nil {
        fmt.Println(err)
        return
    }
    newLine := "File handling is easy."
    fmt.Fprintln(f, newLine)
    err = f.Close()
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("file appended successfully")
}

In line no. 9 of the program above, we open the file in append and write only mode. After the file is opened successfully, we add a new line to the file in line no. 15. This program will print file appended successfully. After running this program, the contents of the lines file will be,

Welcome to the world of Go.  
Go is a compiled language.  
It is easy to learn Go.  
File handling is easy.  

Writing to file concurrently

When multiple goroutines write to a file concurrently, we end up with a race condition. Hence concurrent writes to a file should be coordinated using a channel.

We will write a program that creates 100 goroutines. Each of these goroutines will generate a random number concurrently, thus generating one hundred random numbers in total. These random numbers will be written to a file. We will solve this problem using the following approach.

  1. Create a channel which will be used to read and write the generated random numbers.
  2. Create 100 producer goroutines. Each goroutine will generate a random number and will also write the random number to a channel.
  3. Create a consumer goroutine which will read from the channel and write the generated random numbers to the file. Thus we have only one goroutine writing to the file, thereby avoiding the race condition :)
  4. Close the file once done.

Let's write the produce function first which generates the random numbers.

func produce(data chan int, wg *sync.WaitGroup) {  
    n := rand.Intn(999)
    data <- n
    wg.Done()
}

The function above generates a random number and writes it to the channel data and then calls Done on the waitgroup to notify that it is done with its task.

Let's move to the function which writes to the file now.

func consume(data chan int, done chan bool) {  
    f, err := os.Create("concurrent")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer f.Close()
    for d := range data {
        fmt.Fprintln(f, d)
    }
    done <- true
}

The consume function creates a file named concurrent. It then reads the random numbers from the data channel and writes to the file. Once it has read and written all the random numbers, it writes true to the done channel to notify that it's done with its task.

Let's write the main function and complete this program. I have provided the entire program below.

package main

import (  
    "fmt"
    "math/rand"
    "os"
    "sync"
)

func produce(data chan int, wg *sync.WaitGroup) {  
    n := rand.Intn(999)
    data <- n
    wg.Done()
}

func consume(data chan int, done chan bool) {  
    f, err := os.Create("concurrent")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer f.Close()
    for d := range data {
        fmt.Fprintln(f, d)
    }
    done <- true
}

func main() {  
    data := make(chan int)
    done := make(chan bool)
    wg := sync.WaitGroup{}
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go produce(data, &wg)
    }
    go consume(data, done)
    go func() {
        wg.Wait()
        close(data)
    }()
    <-done
    fmt.Println("File written successfully")
}

The main function creates the data channel from which random numbers are read from and written to in line no. 30. The done channel in line no. 31 is used by the consume goroutine to notify main that it is done with its task. The wg waitgroup in line no. 32 is used to wait for all the 100 goroutines to finish generating random numbers.

The for loop in line no. 33 creates 100 goroutines. The goroutine started in line no. 38 calls Wait() on the WaitGroup to wait for all 100 goroutines to finish generating random numbers, and then closes the channel. Once the channel is closed and the consume goroutine has finished writing all the generated random numbers to the file, it writes true to the done channel in line no. 26; the main goroutine is then unblocked and prints File written successfully.

Now you can open the file concurrent in any text editor and see the 100 generated random numbers :)

This brings us to the end of this tutorial. I hope you enjoyed reading it. Have a great day.




Welcome to tutorial no. 35 in Golang tutorial series.

File reading is one of the most common operations performed in any programming language. In this tutorial we will learn about how files can be read using Go.

This tutorial has the following sections.

  • Reading an entire file into memory
    • Using absolute file path
    • Passing the file path as a command line flag
    • Bundling the file inside the binary
  • Reading a file in small chunks
  • Reading a file line by line

Reading an entire file into memory

One of the most basic file operations is reading an entire file into memory. This is done with the help of the ReadFile function of the ioutil package.

Let's read a file from the directory where our Go program is located. I have created a folder filehandling inside the src directory of my GOPATH, and inside it a text file test.txt, which will be read from our Go program filehandling.go. test.txt contains the text "Hello World. Welcome to file handling in Go.". Here is my folder structure.

src  
    filehandling
                filehandling.go
                test.txt

Let's get to the code right away.

package main

import (  
    "fmt"
    "io/ioutil"
)

func main() {  
    data, err := ioutil.ReadFile("test.txt")
    if err != nil {
        fmt.Println("File reading error", err)
        return
    }
    fmt.Println("Contents of file:", string(data))
}

Please run this program from your local environment as it's not possible to read files in the playground.

Line no. 9 of the program above reads the file and returns a byte slice which is stored in data. In line no. 14 we convert data to a string and display the contents of the file.

Please run this program from the location where test.txt is present.

For example, in the case of linux/mac, if test.txt is located at /home/naveen/go/src/filehandling, then run this program using the following steps,

$]cd /home/naveen/go/src/filehandling/
$]go install filehandling
$]workspacepath/bin/filehandling

In the case of windows, if test.txt is located at C:\Users\naveen.r\go\src\filehandling, then run this program using the following steps,

> cd C:\Users\naveen.r\go\src\filehandling
> go install filehandling
> workspacepath\bin\filehandling.exe 

This program will output,

Contents of file: Hello World. Welcome to file handling in Go.  

If this program is run from any other location, for instance try running the program from /home/userdirectory, it will print the following error.

File reading error open test.txt: The system cannot find the file specified.  

The reason is that Go is a compiled language. go install creates a binary from the source code. The binary is independent of the source code and can be run from any location. Since test.txt is not found in the location from which the binary is run, the program complains that it cannot find the file.

There are three ways to approach this problem,

  1. Using absolute file path
  2. Passing the file path as a command line flag
  3. Bundling the text file along with the binary

Let's discuss them one by one.



1. Using absolute file path

The simplest way to solve this problem is to pass the absolute file path. I have modified the program and changed the path to an absolute one.

package main

import (  
    "fmt"
    "io/ioutil"
)

func main() {  
    data, err := ioutil.ReadFile("/home/naveen/go/src/filehandling/test.txt")
    if err != nil {
        fmt.Println("File reading error", err)
        return
    }
    fmt.Println("Contents of file:", string(data))
}

Now the program can be run from any location and it will print the contents of test.txt.

For example, it will work even when I run it from my home directory

$]cd $HOME
$]go install filehandling
$]workspacepath/bin/filehandling

The program will print the contents of test.txt

This seems to be an easy way but comes with the pitfall that the file should be located in the path specified in the program else this method will fail.

2. Passing the file path as a command line flag

Another way to solve this problem is to pass the file path as a command line flag. Using the flag package, we can get the file path as input from the command line and then read its contents.

Let's first understand how the flag package works. The flag package has a String function. This function accepts 3 arguments. The first is the name of the flag, second is the default value and the third is a short description of the flag.

Let's write a small program to read the file name from the command line. Replace the contents of filehandling.go with the following,

package main  
import (  
    "flag"
    "fmt"
)

func main() {  
    fptr := flag.String("fpath", "test.txt", "file path to read from")
    flag.Parse()
    fmt.Println("value of fpath is", *fptr)
}

Line no. 8 of the program above creates a string flag named fpath with default value test.txt and description file path to read from using the String function. This function returns the address of the string variable that stores the value of the flag.

flag.Parse() should be called before any flag is accessed by the program.

We print the value of the flag in line no. 10

When this program is run using the command

workspacepath/bin/filehandling -fpath=/home/naveen/go/src/filehandling/test.txt  

we pass /home/naveen/go/src/filehandling/test.txt as the value of the flag fpath.

This program outputs

value of fpath is /home/naveen/go/src/filehandling/test.txt  

If the program is run using just filehandling without passing any fpath, it will print

value of fpath is test.txt  

since test.txt is the default value of fpath.

Now that we know how to read the file path from the command line, let's go ahead and finish our file reading program.

package main  
import (  
    "flag"
    "fmt"
    "io/ioutil"
)

func main() {  
    fptr := flag.String("fpath", "test.txt", "file path to read from")
    flag.Parse()
    data, err := ioutil.ReadFile(*fptr)
    if err != nil {
        fmt.Println("File reading error", err)
        return
    }
    fmt.Println("Contents of file:", string(data))
}

The program above reads the content of the file path passed from the command line. Running this program using the command

workspacepath/bin/filehandling -fpath=/home/naveen/go/src/filehandling/test.txt  

will print

Contents of file: Hello World. Welcome to file handling in Go.  

3. Bundling the text file along with the binary

The above option of getting the file path from the command line is good, but there is an even better way to solve this problem. Wouldn't it be awesome if we were able to bundle the text file along with our binary? That's what we are going to do next.

There are various packages that help us achieve this. We will be using packr because it's quite simple and I have been using it for my projects without any problems.

The first step is to install the packr package.

Type the following command in the command prompt to install the package

go get -u github.com/gobuffalo/packr/...  

packr converts static files such as .txt to .go files, which are then embedded directly into the binary. packr is intelligent enough to fetch the static files from disk rather than from the binary during development. This prevents the need for recompilation during development when only static files change.

A program will make us understand things better. Replace the contents of filehandling.go with the following,

package main

import (  
    "fmt"

    "github.com/gobuffalo/packr"
)

func main() {  
    box := packr.NewBox("../filehandling")
    data := box.String("test.txt")
    fmt.Println("Contents of file:", data)
}

In line no. 10 of the program above, we create a new Box. A box represents a folder whose contents will be embedded into the binary. In this case, I am specifying the filehandling folder, which contains test.txt. In the next line, we read the contents of the file and print them.

When we are in development phase, we can use the go install command to run this program. It will work as expected. packr is intelligent enough to load the file from disk during development phase.

Run the program using the following commands.

go install filehandling  
workspacepath/bin/filehandling  

These commands can be run from any location. packr is intelligent enough to get the absolute path of the directory passed to the NewBox function.

This program will print

Contents of file: Hello World. Welcome to file handling in Go.  

Try changing the contents of test.txt and run filehandling again. You can see that the program prints the updated contents of test.txt without the need for any recompilation. Perfect :).

Now let's move to the next step and bundle test.txt into our binary. We use the packr command to do this.

Run the following command

packr install -v filehandling  

This will print

building box ../filehandling  
packing file filehandling.go  
packed file filehandling.go  
packing file test.txt  
packed file test.txt  
built box ../filehandling with ["filehandling.go" "test.txt"]  
filehandling  

This command bundles the static file along with the binary.

After running the above command, run the program using the command workspacepath/bin/filehandling. The program will print the contents of test.txt. Now test.txt is being read from the binary.

If you doubt whether the file is served from within the binary or from disk, I suggest that you delete test.txt and run the command filehandling again. You can see that test.txt's contents are printed. Awesome :D We have successfully embedded static files to our binary.



Reading a file in small chunks

In the last section, we learnt how to load an entire file into memory. When the size of the file is extremely large it doesn't make sense to read the entire file into memory especially if you are running low on RAM. A more optimal way is to read the file in small chunks. This can be done with the help of the bufio package.

Let's write a program that reads our test.txt file in chunks of 3 bytes. Replace the contents of filehandling.go with the following,

package main

import (  
    "bufio"
    "flag"
    "fmt"
    "log"
    "os"
)

func main() {  
    fptr := flag.String("fpath", "test.txt", "file path to read from")
    flag.Parse()

    f, err := os.Open(*fptr)
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err = f.Close(); err != nil {
            log.Fatal(err)
        }
    }()
    r := bufio.NewReader(f)
    b := make([]byte, 3)
    for {
        n, err := r.Read(b)
        if err != nil {
            fmt.Println("Error reading file:", err)
            break
        }
        fmt.Println(string(b[0:n]))
    }
}

In line no. 15 of the program above, we open the file using the path passed from the command line flag.

In line no. 19, we defer the file closing.

Line no. 24 of the program above creates a new buffered reader. In the next line, we create a byte slice of length and capacity 3 into which the bytes of the file will be read.

The Read method in line no. 27 reads up to len(b) bytes, i.e. up to 3 bytes, and returns the number of bytes read. Once the end of the file is reached, it returns an EOF error. The rest of the program is straightforward.

If we run the program above using the commands,

$] go install filehandling
$] workspacepath/bin/filehandling -fpath=/home/naveen/go/src/filehandling/test.txt

the following will be output

Hel  
lo  
Wor  
ld.  
 We
lco  
me  
to  
fil  
e h  
and  
lin  
g i  
n G  
o.  
Error reading file: EOF  

Reading a file line by line

In this section, we will discuss how to read a file line by line using Go. This can be done using the bufio package.

Please replace the contents in test.txt with the following

Hello World. Welcome to file handling in Go.  
This is the second line of the file.  
We have reached the end of the file.  

The following are the steps involved in reading a file line by line.

  1. Open the file
  2. Create a new scanner from the file
  3. Scan the file and read it line by line.

Replace the contents of filehandling.go with the following

package main

import (  
    "bufio"
    "flag"
    "fmt"
    "log"
    "os"
)

func main() {  
    fptr := flag.String("fpath", "test.txt", "file path to read from")
    flag.Parse()

    f, err := os.Open(*fptr)
    if err != nil {
        log.Fatal(err)
    }
    defer func() {
        if err = f.Close(); err != nil {
            log.Fatal(err)
        }
    }()
    s := bufio.NewScanner(f)
    for s.Scan() {
        fmt.Println(s.Text())
    }
    err = s.Err()
    if err != nil {
        log.Fatal(err)
    }
}

In line no. 15 of the program above, we open the file using the path passed from the command line flag. In line no. 24, we create a new scanner from the file. The Scan() method in line no. 25 reads the next line of the file, which is then made available through the Text() method.

After Scan returns false, the Err() method will return any error that occurred during scanning, except that if it was End of File, Err() will return nil.

If we run the program above using the commands,

$] go install filehandling
$] workspacepath/bin/filehandling -fpath=/home/naveen/go/src/filehandling/test.txt

It will output

Hello World. Welcome to file handling in Go.  
This is the second line of the file.  
We have reached the end of the file.  

This brings us to the end of this tutorial. I hope you enjoyed it. Have a good day.

Like my tutorials? Please show your support by donating. Your donations will help me create more awesome tutorials.

Golangbot | Golang tutorial by Naveen Ramanathan - 1y ago

Welcome to tutorial no. 34 in Golang tutorial series.

Reflection is one of the advanced topics in Go. I will try to make it as simple as possible.

This tutorial has the following sections.

  • What is reflection?
  • What is the need to inspect a variable and find its type?
  • reflect package
    • reflect.Type and reflect.Value
    • reflect.Kind
    • NumField() and Field() methods
    • Int() and String() methods
  • Complete program
  • Should reflection be used?

Let's discuss these sections one by one now.

What is reflection?

Reflection is the ability of a program to inspect its variables and values at run time and find their type. You might not understand what this means but that's alright. You will get a clear understanding of reflection by the end of this tutorial, so stay with me.

What is the need to inspect a variable and find its type?

The first question anyone has when learning about reflection is: why do we even need to inspect a variable and find its type at runtime when each and every variable in our program is defined by us and we know its type at compile time itself? Well, this is true most of the time, but not always.

Let me explain what I mean. Let's write a simple program.

package main

import (  
    "fmt"
)

func main() {  
    i := 10
    fmt.Printf("%d %T", i, i)
}

Run in playground

In the program above, the type of i is known at compile time and we print it in the next line. Nothing magical here.

Now let's understand the need to know the type of a variable at run time. Let's say we want to write a simple function which will take a struct as argument and will create a SQL insert query using it.

Consider the following program,

package main

import (  
    "fmt"
)

type order struct {  
    ordId      int
    customerId int
}

func main() {  
    o := order{
        ordId:      1234,
        customerId: 567,
    }
    fmt.Println(o)
}

Run in playground

We need to write a function which will take the struct o in the program above as an argument and return the following SQL insert query,

insert into order values(1234, 567)  

This function is simple to write. Let's do that now.

package main

import (  
    "fmt"
)

type order struct {  
    ordId      int
    customerId int
}

func createQuery(o order) string {  
    i := fmt.Sprintf("insert into order values(%d, %d)", o.ordId, o.customerId)
    return i
}

func main() {  
    o := order{
        ordId:      1234,
        customerId: 567,
    }
    fmt.Println(createQuery(o))
}

Run in playground

The createQuery function in line no. 12 creates the insert query by using the ordId and customerId fields of o. This program will output,

insert into order values(1234, 567)  



Now let's take our query creator to the next level. What if we want to generalize our query creator and make it work on any struct? Let me explain what I mean using a program.

package main

type order struct {  
    ordId      int
    customerId int
}

type employee struct {  
    name string
    id int
    address string
    salary int
    country string
}

func createQuery(q interface{}) string {  
}

func main() {

}

Our objective is to finish the createQuery function in line no. 16 of the above program so that it takes any struct as argument and creates an insert query based on the struct fields.

For example, if we pass the struct below,

o := order {  
    ordId: 1234,
    customerId: 567
}

Our createQuery function should return,

insert into order values (1234, 567)  

Similarly if we pass

 e := employee {
        name: "Naveen",
        id: 565,
        address: "Science Park Road, Singapore",
        salary: 90000,
        country: "Singapore",
    }

it should return,

insert into employee values("Naveen", 565, "Science Park Road, Singapore", 90000, "Singapore")  

Since the createQuery function should work with any struct, it takes an interface{} as an argument. For simplicity, we will only deal with structs that contain fields of type string and int, but this can be extended to any type.

The createQuery function should work on any struct. The only way to write this function is to examine the type of the struct argument passed to it at run time, find its fields and then create the query. This is where reflection is useful. In the next steps of the tutorial, we will learn how we can achieve this using the reflect package.

reflect package

The reflect package implements run-time reflection in Go. It helps identify the underlying concrete type and the value of an interface{} variable. This is exactly what we need: the createQuery function takes an interface{} argument, and the query needs to be created based on the concrete type and value of that argument.

There are a few types and methods in the reflect package that we need to know before writing our generic query generator. Let's look at them one by one.

reflect.Type and reflect.Value

The concrete type of interface{} is represented by reflect.Type and the underlying value is represented by reflect.Value. There are two functions reflect.TypeOf() and reflect.ValueOf() which return the reflect.Type and reflect.Value respectively. These two types are the base to create our query generator. Let's write a simple example to understand these two types.

package main

import (  
    "fmt"
    "reflect"
)

type order struct {  
    ordId      int
    customerId int
}

func createQuery(q interface{}) {  
    t := reflect.TypeOf(q)
    v := reflect.ValueOf(q)
    fmt.Println("Type ", t)
    fmt.Println("Value ", v)


}
func main() {  
    o := order{
        ordId:      456,
        customerId: 56,
    }
    createQuery(o)

}

Run in playground

In the program above, the createQuery function in line no. 13 takes an interface{} as an argument. The function reflect.TypeOf in line no. 14 takes an interface{} as an argument and returns a reflect.Type containing the concrete type of the argument passed. Similarly, the reflect.ValueOf function in line no. 15 takes an interface{} as an argument and returns a reflect.Value containing its underlying value.

The above program prints,

Type  main.order  
Value  {456 56}  

From the output, we can see that the program prints the concrete type and the value of the interface.



reflect.Kind

There is one more important type in the reflection package called Kind.

The types Kind and Type in the reflection package might seem similar but they have a difference which will be clear from the program below.

package main

import (  
    "fmt"
    "reflect"
)

type order struct {  
    ordId      int
    customerId int
}

func createQuery(q interface{}) {  
    t := reflect.TypeOf(q)
    k := t.Kind()
    fmt.Println("Type ", t)
    fmt.Println("Kind ", k)


}
func main() {  
    o := order{
        ordId:      456,
        customerId: 56,
    }
    createQuery(o)

}

Run in playground

The program above outputs,

Type  main.order  
Kind  struct  

I think the difference between the two will now be clear. Type represents the actual type of the interface{}, in this case main.order, while Kind represents the specific kind of that type. In this case, it's a struct.

NumField() and Field() methods

The NumField() method returns the number of fields in a struct and the Field(i int) method returns the reflect.Value of the ith field.

package main

import (  
    "fmt"
    "reflect"
)

type order struct {  
    ordId      int
    customerId int
}

func createQuery(q interface{}) {  
    if reflect.ValueOf(q).Kind() == reflect.Struct {
        v := reflect.ValueOf(q)
        fmt.Println("Number of fields", v.NumField())
        for i := 0; i < v.NumField(); i++ {
            fmt.Printf("Field:%d type:%T value:%v\n", i, v.Field(i), v.Field(i))
        }
    }

}
func main() {  
    o := order{
        ordId:      456,
        customerId: 56,
    }
    createQuery(o)
}

Run in playground

In the program above, in line no. 14, we first check whether the Kind of q is a struct because the NumField method works only on structs. The rest of the program is self explanatory. This program outputs,

Number of fields 2  
Field:0 type:reflect.Value value:456  
Field:1 type:reflect.Value value:56  

Int() and String() methods

The methods Int and String help extract the reflect.Value as an int64 and string respectively.

package main

import (  
    "fmt"
    "reflect"
)

func main() {  
    a := 56
    x := reflect.ValueOf(a).Int()
    fmt.Printf("type:%T value:%v\n", x, x)
    b := "Naveen"
    y := reflect.ValueOf(b).String()
    fmt.Printf("type:%T value:%v\n", y, y)

}

Run in playground

In the program above, in line no. 10, we extract the reflect.Value as an int64, and in line no. 13, we extract it as a string. This program prints,

type:int64 value:56  
type:string value:Naveen  

Complete Program

Now that we have enough knowledge to finish our query generator, let's go ahead and do it.

package main

import (  
    "fmt"
    "reflect"
)

type order struct {  
    ordId      int
    customerId int
}

type employee struct {  
    name    string
    id      int
    address string
    salary  int
    country string
}

func createQuery(q interface{}) {  
    if reflect.ValueOf(q).Kind() == reflect.Struct {
        t := reflect.TypeOf(q).Name()
        query := fmt.Sprintf("insert into %s values(", t)
        v := reflect.ValueOf(q)
        for i := 0; i < v.NumField(); i++ {
            switch v.Field(i).Kind() {
            case reflect.Int:
                if i == 0 {
                    query = fmt.Sprintf("%s%d", query, v.Field(i).Int())
                } else {
                    query = fmt.Sprintf("%s, %d", query, v.Field(i).Int())
                }
            case reflect.String:
                if i == 0 {
                    query = fmt.Sprintf("%s\"%s\"", query, v.Field(i).String())
                } else {
                    query = fmt.Sprintf("%s, \"%s\"", query, v.Field(i).String())
                }
            default:
                fmt.Println("Unsupported type")
                return
            }
        }
        query = fmt.Sprintf("%s)", query)
        fmt.Println(query)
        return

    }
    fmt.Println("unsupported type")
}

func main() {  
    o := order{
        ordId:      456,
        customerId: 56,
    }
    createQuery(o)

    e := employee{
        name:    "Naveen",
        id:      565,
        address: "Coimbatore",
        salary:  90000,
        country: "India",
    }
    createQuery(e)
    i := 90
    createQuery(i)

}

Run in playground

In line no. 22, we first check whether the passed argument is a struct. In line no. 23 we get the name of the struct from its reflect.Type using the Name() method. In the next line, we use t and start creating the query.

The case statement in line no. 28 checks whether the current field is reflect.Int. If that's the case, we extract the value of that field as int64 using the Int() method. The if else statement ensures that a separating comma is added before every value except the first. Similar logic is used to extract the string in line no. 34.

We have also added checks to prevent the program from crashing when unsupported types are passed to the createQuery function. The rest of the program is self explanatory. I recommend adding logs at appropriate places and checking their output to understand this program better.

This program prints,

insert into order values(456, 56)  
insert into employee values("Naveen", 565, "Coimbatore", 90000, "India")  
unsupported type  

I will leave it as an exercise for the reader to add the field names to the output query. Please try changing the program to print a query of the format,

insert into order(ordId, customerId) values(456, 56)  
Should reflection be used?

Having shown a practical use of reflection, now comes the real question. Should you be using reflection? I would like to quote Rob Pike's proverb on the use of reflection which answers this question.

Clear is better than clever. Reflection is never clear.

Reflection is a very powerful and advanced concept in Go and it should be used with care. It is very difficult to write clear and maintainable code using reflection. It should be avoided wherever possible and should be used only when absolutely necessary.

This brings us to an end of this tutorial. Hope you enjoyed it. Have a good day.

Like my tutorials? Please show your support by donating. Your donations will help me create more awesome tutorials.
