Source: golang-github-blevesearch-segment
Section: devel
Priority: extra
Maintainer: Debian Go Packaging Team <pkg-go-maintainers@lists.alioth.debian.org>
Uploaders: Michael Lustfield <michael@lustfield.net>
Build-Depends: debhelper (>= 10),
               dh-golang,
               golang-any
Standards-Version: 3.9.8
Homepage: https://github.com/blevesearch/segment
Vcs-Browser: https://anonscm.debian.org/cgit/pkg-go/packages/golang-github-blevesearch-segment.git
Vcs-Git: https://anonscm.debian.org/git/pkg-go/packages/golang-github-blevesearch-segment.git
XS-Go-Import-Path: github.com/blevesearch/segment

Package: golang-github-blevesearch-segment-dev
Architecture: all
Depends: ${shlibs:Depends},
         ${misc:Depends}
Description: Go library for Unicode Text Segmentation (Unicode Standard Annex #29)
 segment is a Go library for performing Unicode Text Segmentation, as
 described in Unicode Standard Annex #29
 (http://www.unicode.org/reports/tr29/).
 .
 Features:
  * Currently only segmentation at Word Boundaries is supported.
 .
 License: Apache License Version 2.0.
 .
 Usage: the functionality is exposed in two ways:
 .
  * You can use a bufio.Scanner with the SplitWords implementation of
    SplitFunc. The SplitWords function identifies the appropriate word
    boundaries in the input text, and the Scanner returns tokens at the
    appropriate places:
 .
      scanner := bufio.NewScanner(...)
      scanner.Split(segment.SplitWords)
      for scanner.Scan() {
          tokenBytes := scanner.Bytes()
      }
      if err := scanner.Err(); err != nil {
          t.Fatal(err)
      }
 .
  * Sometimes you would also like information about the type of each
    token. For this, a new type named Segmenter is provided. It works
    just like Scanner, but additionally returns a token type:
 .
      segmenter := segment.NewWordSegmenter(...)
      for segmenter.Segment() {
          tokenBytes := segmenter.Bytes()
          tokenType := segmenter.Type()
      }
      if err := segmenter.Err(); err != nil {
          t.Fatal(err)
      }
 .
 Choosing an implementation: by default, segment does NOT use the
 fastest runtime implementation, because it adds approximately 5s to
 compilation time and may require more than 1 GB of RAM on the machine
 performing the compilation.
 .
 However, you can choose to build with the fastest runtime
 implementation by passing the following build tag:
 .
     -tags 'prod'
 Generating code: several components in this package are generated:
  * Several Ragel rules files are generated from Unicode properties
    files.
  * The Ragel machine is generated from the Ragel rules.
  * Test tables are generated from the Unicode test files.
 .
 All of these can be regenerated by running:
 .
     go generate
 .
 Fuzzing: there is support for fuzzing the segment library with go-fuzz
 (https://github.com/dvyukov/go-fuzz):
  * Install go-fuzz if you haven't already:
 .
      go get github.com/dvyukov/go-fuzz/go-fuzz
      go get github.com/dvyukov/go-fuzz/go-fuzz-build
 .
  * Build the package with go-fuzz:
 .
      go-fuzz-build github.com/blevesearch/segment
 .
  * Convert the Unicode-provided test cases into the initial corpus
    for go-fuzz:
 .
      go test -v -run=TestGenerateWordSegmentFuzz -tags gofuzz_generate
 .
  * Run go-fuzz:
 .
      go-fuzz -bin=segment-fuzz.zip -workdir=workdir
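A SplitFunc such as SplitWords must report how many input bytes to advance and which bytes form the next token. The following toy sketch illustrates that contract by splitting on runs of Unicode letters; it is NOT the UAX #29 algorithm this library implements, and the names splitLetters and letterTokens are illustrative only.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
	"unicode"
	"unicode/utf8"
)

// splitLetters is a toy bufio.SplitFunc with the same contract that
// segment.SplitWords fulfils: skip non-letters, then emit the next
// run of Unicode letters as one token.
func splitLetters(data []byte, atEOF bool) (int, []byte, error) {
	// Skip leading non-letter runes.
	start := 0
	for start < len(data) {
		r, n := utf8.DecodeRune(data[start:])
		if unicode.IsLetter(r) {
			break
		}
		start += n
	}
	// Consume letters until the first non-letter.
	for i := start; i < len(data); {
		r, n := utf8.DecodeRune(data[i:])
		if !unicode.IsLetter(r) {
			return i, data[start:i], nil
		}
		i += n
	}
	if atEOF && len(data) > start {
		// Final token runs to the end of the input.
		return len(data), data[start:], nil
	}
	// Need more data to decide where the token ends.
	return start, nil, nil
}

// letterTokens collects the tokens splitLetters produces for s.
func letterTokens(s string) []string {
	sc := bufio.NewScanner(strings.NewReader(s))
	sc.Split(splitLetters)
	var out []string
	for sc.Scan() {
		out = append(out, sc.Text())
	}
	return out
}

func main() {
	fmt.Println(letterTokens("Hello, 世界 - go!")) // [Hello 世界 go]
}
```

The real SplitWords applies the UAX #29 word-boundary rules instead of this letter-run heuristic, but it plugs into bufio.Scanner through exactly this advance/token/error contract.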
