
Read & Share: building selections in a new iOS 13 app in Swift

I've been working on a project that I aim to release with iOS 13 later this year, and I've decided to post some build logs here covering interesting features or new things I learn. I talked a little about it on Twitter:

The idea for Read & Share stems from a) my interest in using some new iOS 13 features in production and b) my newfound reading time during my commute, where I wanted to share what I read on Twitter et al. but didn't have the tools to do it – not all of us can make the Notes.app screenshot look aesthetic.

This series will be a mix of how I build features I am already familiar with, as well as experiments with the newer features of iOS 13 and Xcode 11 that we are all unfamiliar with.

Even experienced iOS engineers are newcomers to SwiftUI and Combine, and that level playing field around the new features means even the basics are worth covering for everyone.

Let's get right into the first build log:

The core user interface here, which everything else feeds in and out of, is the selection screen, so that's where I started the app. There are many pieces of it that I know how to do already (but maybe not in iOS 13, who knows!). This is also a piece I'll want to iterate on a lot, so I might as well get a first version in.

Text comes into the app in different ways – sharing existing highlights from e-readers, copying and pasting pieces of text, and even taking camera pictures of physical books – and it all lands on the highlight screen, where you choose the part you want to share. After that, you can tweak the book source or play with the quote style, but all of these other flows pass through this one interface, which must be intuitively understandable across a variety of uses.

[Image: highlighting flow]

I started building this exact interface in SwiftUI, realized that I didn't know anything about it, and started over in UIKit, where I am much more familiar. Eventually I would like to rebuild all of this in SwiftUI, but for now I've settled on building the simple things (Drawers! Navigation! Tabs!) in SwiftUI and giving myself a reprieve on the custom UIKit interface. That's one of the nice parts about SwiftUI: you're not completely cut off from UIKit if you don't want to be, though there is some boilerplate that connects the two. We will most likely cover this in a future post.
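As a rough sketch of that connective boilerplate: SwiftUI hosts a UIKit view through `UIViewRepresentable`. The wrapper type and property names here are my own invention, not the app's real types.

```swift
import SwiftUI
import UIKit

// Hypothetical wrapper: lets a SwiftUI screen embed a UIKit UITextView,
// so custom UIKit work can live inside a mostly-SwiftUI app.
struct HighlightTextViewWrapper: UIViewRepresentable {
    let text: String

    func makeUIView(context: Context) -> UITextView {
        let textView = UITextView()
        textView.isEditable = false
        return textView
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        // SwiftUI calls this whenever `text` changes.
        uiView.text = text
    }
}
```

SwiftUI treats the wrapper like any other `View`, so it can sit inside the SwiftUI-built drawers, navigation, and tabs.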

Making selections

The end goal here is to make it easy to tap and drag to select text, which sounds easy, but there are several steps to get there:

  1. Get the bounds of each word
  2. Get tap points
  3. Manage word selection
  4. Draw stylized highlight layers
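A minimal sketch of the model these steps produce and consume: one rect per selectable word, plus the word's text for sharing later. The `WordRect` name appears in the code further down; the hit-test helper is my own addition.

```swift
import Foundation

// One selectable word: its on-screen rect and its text.
struct WordRect {
    let rect: CGRect
    let text: String

    init(withRect rect: CGRect, andText text: String) {
        self.rect = rect
        self.text = text
    }

    // Step 2: a tap "hits" a word when its point lands inside the rect.
    func hit(by point: CGPoint) -> Bool {
        return rect.contains(point)
    }
}
```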

Support for finding text boundaries in UITextView is pretty good, so I selected it as the base text view. I started using firstRect(for:) to find rects for each word that can be selected.

Getting our rects requires a UITextRange, which is not quite the same as a standard Swift Range. You can brush up on your Swift string knowledge here, but the short version is that we need a few extra steps to end up with a range we can use to get our rects.
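To illustrate one of those extra steps: a `Range<String.Index>` has to be converted to UTF-16 offsets before UIKit text APIs can use it. A small self-contained example (the sample string is mine):

```swift
import Foundation

let text = "Reading on the train"

// range(of:) returns a Range<String.Index>, not integer offsets.
let range = text.range(of: "train")!

// NSRange(_:in:) converts it to UTF-16 offsets, the unit that UIKit
// text APIs (like UITextView's position(from:offset:)) count in.
let nsRange = NSRange(range, in: text)
print(nsRange.location, nsRange.length)  // prints "15 5"
```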

Initially, I implemented this with the first method I found, range(of:), and it was a good starting point for validating how the rects looked, so we could use them both as the basis for the highlight shapes and to decide whether taps have hit a word. Eventually, though, we need to generate these rects for every word, not just the first occurrence of a word, which is all the simple range(of:) gives us.
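For comparison, Foundation also offers a way to visit every word rather than only the first match; this isn't the approach the post settles on (which uses Scanner, below), just an alternative worth knowing:

```swift
import Foundation

let sample = "the quick brown fox jumps over the lazy dog"
var wordRanges: [Range<String.Index>] = []

// enumerateSubstrings(in:options:) visits every word, including the
// repeated "the", unlike range(of:), which stops at the first match.
sample.enumerateSubstrings(in: sample.startIndex..<sample.endIndex,
                           options: .byWords) { _, range, _, _ in
    wordRanges.append(range)
}
print(wordRanges.count)  // prints "9"
```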

Two suboptimal parts here: first, Scanner is not as Swift-friendly as we'd want, but passing a pointer to an optional NSString, i.e. &nextWord where nextWord is NSString?, will do the job when the docs say they want an AutoreleasingUnsafeMutablePointer. Second, this code is not very Unicode-safe as it stands. I do some character counting here that doesn't line up directly with how String collapses complex multi-scalar glyphs into a single String.Index. I'll keep refining this component throughout this process, and one of those passes will include checking Unicode support. For now it will do.

The whole block scans up to the next whitespace, gets the start and end positions (as UITextPositions) for each word, and uses those to get a UITextRange, which in turn gets us a CGRect for that word. The text is static on this screen (for now), so calculating everything up front ensures we have all the data we need for the rest of the highlighting steps.

    func loadRects(fromTextView textView: UITextView) {
        var rects: [WordRect] = []
        var currentScanPosition = 0
        let scanner = Scanner(string: textView.text)
        while !scanner.isAtEnd {
            var nextWord: NSString?
            scanner.scanUpToCharacters(from: .whitespacesAndNewlines, into: &nextWord)
            guard let existingNextWord = nextWord else { return }

            let startPosition = textView.position(from: textView.beginningOfDocument, offset: currentScanPosition)
            let endPosition = textView.position(from: textView.beginningOfDocument, offset: currentScanPosition + existingNextWord.length)

            if let textRange = textView.textRange(from: startPosition!, to: endPosition!) {
                let rect = trimmedRectFromTextContainer(textView.firstRect(for: textRange))
                rects.append(WordRect(withRect: rect, andText: existingNextWord as String))
            }

            currentScanPosition += existingNextWord.length + 1
        }

        self.wordRects = rects
    }

Once I have the word rects, taps are forwarded to the selection manager, which applies the selection rules. If you tap the first word and then the last word, the app should select all the words in between for you – this logic and more is handled in the selection manager.
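The tap-first-then-last rule can be sketched like this; the type and method names are my own, not the app's. Words are identified by their index into the precomputed word rect array.

```swift
import Foundation

// Minimal sketch of the selection rule: the first tap selects one
// word, and later taps extend the selection to span everything
// between the earliest and latest tapped words.
struct SelectionManager {
    private(set) var selectedRange: ClosedRange<Int>?

    mutating func tapped(wordAt index: Int) {
        guard let current = selectedRange else {
            // First tap: select just this word.
            selectedRange = index...index
            return
        }
        // Extend to cover everything in between.
        selectedRange = min(current.lowerBound, index)...max(current.upperBound, index)
    }

    mutating func clear() {
        selectedRange = nil
    }
}
```

Keeping this as a plain value type with no view knowledge is what lets the display layer stay separate, as described below.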

Finally, the view controller takes the selections and, knowing a little about the rules for how text can be selected, renders custom CAShapeLayers in a layer behind the UITextView.
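A hedged sketch of what that rendering step might look like, assuming the word rects from earlier; the function name and styling values are my own:

```swift
import UIKit

// Draw one rounded highlight shape behind the text by unioning the
// selected word rects into a single CAShapeLayer path.
func drawHighlight(over rects: [CGRect], behind textView: UITextView) {
    let path = UIBezierPath()
    for rect in rects {
        path.append(UIBezierPath(roundedRect: rect, cornerRadius: 4))
    }
    let highlight = CAShapeLayer()
    highlight.path = path.cgPath
    highlight.fillColor = UIColor.systemYellow.withAlphaComponent(0.4).cgColor
    // Insert at index 0 so the shape sits behind the text view's own
    // content subviews, i.e. behind the text itself.
    textView.layer.insertSublayer(highlight, at: 0)
}
```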

[Image: highlight process]

The distinction between what happens in the selection manager and what happens in the view controller is the display level. The selection manager doesn't need to know anything about the layout of the screen, just the basic rules for selecting text. The owning view controller can handle both the conversion from taps → word rect hits, and from selection ranges → highlight layer placements.

Questions or comments? Find us on Twitter or open an issue on GitHub.
